
Getting "500 status code received on POST method for API" in ambari while trying to install Kerberos

Contributor
  • OS: CentOS 6.6 on all hosts
  • Ambari version 2.1 & HDP 2.3.0
  • Operation: Installing Kerberos with an AD KDC
  • The error I get in Ambari, more precisely in step 3 while performing the Test Client:

500 status code received on POST method for API: /api/v1/clusters/HdpCluster/requests

Error message: Server Error
  • Ambari-server log:

23 Dec 2015 21:53:30,301 ERROR [pool-2-thread-1] AmbariJpaLocalTxnInterceptor:114 - [DETAILED ERROR] Rollback reason: Local Exception Stack:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException:
ERROR: duplicate key value violates unique constraint "uni_alert_group_name"
Error Code: 0
Call: INSERT INTO alert_group (group_id, cluster_id, group_name, is_default, service_name) VALUES (?, ?, ?, ?, ?)
    bind => [5 parameters bound]
    at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
    at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1611)

Any clue on how to solve this issue?

1 ACCEPTED SOLUTION

Mentor

Back up your Ambari database, then go into it and delete the old entry in the alert_group table that matches the error.


15 REPLIES

Mentor

Back up your Ambari database, then go into it and delete the old entry in the alert_group table that matches the error.
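
For reference, a minimal sketch of that cleanup, assuming Ambari is using the default embedded PostgreSQL database and database user, both named "ambari"; the group name in the DELETE is only a placeholder for whatever name the constraint error reports:

# Back up the Ambari database first
pg_dump -U ambari ambari > /tmp/ambari-db-backup.sql

# Inspect the existing alert groups, then delete the duplicate row
psql -U ambari -d ambari -c "SELECT group_id, cluster_id, group_name FROM alert_group;"
psql -U ambari -d ambari -c "DELETE FROM alert_group WHERE group_name = '<duplicate_group_name>';"

# Restart Ambari afterwards
ambari-server restart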

Contributor

@Artem Ervits Thank you, this solved part of the problem. However, now I am getting another error (the same 500 status code and still in the same step):

 24 Dec 2015 12:58:35,191  INFO [qtp-client-52] AmbariManagementControllerImpl:3230 - Received action execution request, clusterName=HdpCluster, request=isCommand :true, action :null, command :KERBEROS_SERVICE_CHECK, inputs :{}, resourceFilters: [RequestResourceFilter{serviceName='KERBEROS', componentName='null', hostNames=[]}], exclusive: false, clusterName :HdpCluster
24 Dec 2015 12:58:35,934 ERROR [qtp-client-52] BaseManagementHandler:66 - Caught a runtime exception while attempting to create a resource
java.lang.NullPointerException
        at org.apache.ambari.server.actionmanager.ActionDBAccessorImpl.persistActions(ActionDBAccessorImpl.java:265)
        at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:68)

I googled this error, and people propose including the Ambari server as a cluster member host (i.e. having an agent running side by side with the Ambari server on the same machine). Is this really the only solution? I tried adding a new host in the Ambari wizard (to add the Ambari server itself to the list of hosts), and then I got this error while registering the host:

INFO 2015-12-24 13:11:20,282 NetUtil.py:59 - Connecting to https://ambari-server.example.com:8440/ca
ERROR 2015-12-24 13:11:20,379 NetUtil.py:77 - [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
ERROR 2015-12-24 13:11:20,380 NetUtil.py:78 - SSLError: Failed to connect. Please check openssl library versions. 

I am running Oracle JDK on the ambari server:

cat /etc/ambari-server/conf/ambari.properties

java.home=/usr/jdk64/jdk1.7.0_67
server.jdbc.postgres.schema=ambari
jdk.name=jdk-7u67-linux-x64.tar.gz

Your help is much appreciated!

Mentor

Also, for the future, try to tag your posts with more general tags: instead of ambari-2.1.0, tag with "Ambari". That way, people following the Ambari topic are notified of your questions.

Contributor

Ok! Thank you.

Mentor

Is your server name really ambari-server.example.com? Confirm by opening /etc/ambari-agent/conf/ambari-agent.ini and checking the hostname property in the [server] section. Set it to whatever the node name is. You can get the FQDN of the node by running "hostname -f".
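
A quick sketch of that check, run on the host that fails to register; nothing here is specific to this cluster:

hostname -f                                                    # FQDN of this node
grep -A 3 '\[server\]' /etc/ambari-agent/conf/ambari-agent.ini   # hostname= should point at the Ambari server FQDN

# If the value is wrong, edit the [server] section accordingly and restart the agent
ambari-agent restart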

Contributor

Actually, I changed it. My real FQDN is manager.cluster.mediatvcom

My FQDN is correct; it appears in the [server] section of ambari-agent.ini. While registering, I can see this message:

WARNING 2015-12-24 15:24:52,405 NetUtil.py:105 - Server at https://manager.cluster.mediatvcom:8440 is not reachable, sleeping for 10 seconds...

Port 8440 is open and the ambari-agent is collocated with the Ambari server, but while registering, the Confirm Hosts step always fails in the Add Host Wizard... Do you think it is related to SSL? 😞
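
One way to narrow down whether this is an SSL problem is to hit the registration endpoint by hand from the failing host; a sketch, using the URL from the agent log above:

# Check the raw TLS handshake against the Ambari server's registration port
openssl s_client -connect manager.cluster.mediatvcom:8440 </dev/null

# Then try the /ca endpoint the agent requests (-k skips certificate verification,
# so if this succeeds while the agent fails, certificate verification is the likely culprit)
curl -vk https://manager.cluster.mediatvcom:8440/ca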

Mentor

Try localhost instead? Also, check the version of the openssl package on your OS and confirm it against the Ambari version you're using in our guides. If other nodes connect successfully, confirm that the openssl version on those hosts matches this host.

Contributor
rpm -qa | grep openssl
openssl-1.0.1e-42.el6.x86_64

I have this version running on all my hosts.

The documentation says we should run OpenSSL v1.01, build 16 or later, so since build 42 > 16 it should be fine 🙂

Yes, all my nodes run the ambari-agent successfully.

I just upgraded the OpenSSL version running on the Ambari server to openssl-1.0.1e-42.el6_7.1.x86_64, but it still does not work...

I just figured out, from the last comment raised by @gaurav sharma in this thread:

https://community.hortonworks.com/questions/145/op...

that we are running into the same issue 😞

Mentor

It does say in the post you reference that the user enabled Oracle JDK 1.8; have you tried that? Here's the documentation for changing the JDK: http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_ambari_reference_guide/content/ch_changin...
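
As a rough sketch of what the linked guide covers, the JDK used by Ambari can be switched with ambari-server setup; the JDK 1.8 path below is only an example, not a path verified on this cluster:

ambari-server setup -j /usr/jdk64/jdk1.8.0_60    # point Ambari at the new JAVA_HOME
ambari-server restart                            # restart so the new JDK takes effect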

Contributor

Thank you for the pointer.

Actually, I was running Oracle JDK 1.7 everywhere, so according to the doc that should be fine.

On the Ambari server I upgraded to JDK 1.8, but it still does not work... Maybe I have to download and run the Oracle JDK on the rest of the machines, but I wonder whether this really has to be done, since everything works fine for the nodes of the cluster... Anyway, I will give it a try and get back to you.

This is really confusing; please let me know whether this should work out of the box.

Mentor

@Ali Gouta it would be asking too much to replace a working JDK just to troubleshoot one node. I don't think it's necessary; unfortunately, I am not familiar with this error. Let's hope the bigger HCC community can chip in.

Contributor

Ok! For the moment, I will leave things as they are. Ultimately, I will open a new thread related to this specific question if I run out of options.

Thank you so much for your guidance and support! So appreciated.

Contributor
@Artem Ervits

I am actually facing a new problem while trying to add the manager host (the one running the Ambari server) to the list of cluster hosts. In the Review step, when I click on Deploy, I get this error:

[screenshot attached: 1111-error-server.png]

Looking at the ambari-server.log, I can see the following error:

28 Dec 2015 18:52:38,088  INFO [qtp-client-21] StackAdvisorRunner:61 - Stack-advisor output=/var/run/ambari-server/stack-recommendations/15/stackadvisor.out, error=/var/run/ambari-server/stack-recommendations/15/stackadvisor.err
28 Dec 2015 18:52:38,170  INFO [qtp-client-21] StackAdvisorRunner:69 - Stack advisor output files
28 Dec 2015 18:52:38,170  INFO [qtp-client-21] StackAdvisorRunner:70 -     advisor script stdout: StackAdvisor implementation for stack HDP, version 2.0.6 was loaded
StackAdvisor implementation for stack HDP, version 2.1 was loaded
StackAdvisor implementation for stack HDP, version 2.2 was loaded
StackAdvisor implementation for stack HDP, version 2.3 was loaded
Returning HDP23StackAdvisor implementation
28 Dec 2015 18:52:38,170  INFO [qtp-client-21] StackAdvisorRunner:71 -     advisor script stderr:
28 Dec 2015 18:52:44,781  WARN [qtp-client-21] ServletHandler:563 - /api/v1/clusters/HdpCluster/services
java.lang.RuntimeException: Trying to create a ServiceComponent not recognized in stack info, clusterName=HdpCluster, serviceName=HDFS, componentName=NFS_GATEWAY, stackInfo=HDP-2.2
        at org.apache.ambari.server.state.ServiceComponentImpl.<init>(ServiceComponentImpl.java:107)
        at org.apache.ambari.server.state.ServiceComponentImpl$$EnhancerByGuice$$685980cd.<init>(<generated>)

Three months ago I upgraded my cluster from HDP 2.2 to HDP 2.3; the error might be related to that. Do you have any idea how I can solve it? Do you recommend opening a new thread instead?
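
Since the exception reports stackInfo=HDP-2.2 even though the cluster was upgraded to HDP 2.3, it may be worth confirming which stack version Ambari thinks the cluster is on; a sketch, assuming the default Ambari port 8080 and admin credentials:

curl -u admin:admin -H 'X-Requested-By: ambari' \
  'http://manager.cluster.mediatvcom:8080/api/v1/clusters/HdpCluster?fields=Clusters/version'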

Contributor

And it looks like NFS_GATEWAY does not exist on any host. From Ambari -> HDFS, when I click on NFS_GATEWAY, I don't see any host hosting it:

No hosts to display

Is there a risk of removing it with a curl DELETE call?

Mentor

It's always better to open a new thread for a separate issue; you have a better chance of getting an answer than by risking having your issues buried in old replies. As for the issue you're having, have you tried restarting Ambari and running service checks recently? If so, run the API commands to get a list of components, clean up any reference to NFS_GATEWAY, and run the service checks again. You have some issues you need to work through, @Ali Gouta.
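
A sketch of that cleanup via the Ambari REST API, assuming the default port 8080 and admin credentials; verify with the GET first that NFS_GATEWAY really has no host components before issuing the DELETE:

# List the stale HDFS component and check whether it is registered on any host
curl -u admin:admin -H 'X-Requested-By: ambari' \
  'http://manager.cluster.mediatvcom:8080/api/v1/clusters/HdpCluster/services/HDFS/components/NFS_GATEWAY'

# If it has no host components, remove the stale service component definition
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE \
  'http://manager.cluster.mediatvcom:8080/api/v1/clusters/HdpCluster/services/HDFS/components/NFS_GATEWAY'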
