
Getting "500 status code received on POST method for API" in ambari while trying to install Kerberos

Rising Star
  • OS: CentOS 6.6 on all hosts
  • Ambari version 2.1 & HDP 2.3.0
  • Operation: Installing Kerberos with an AD KDC
  • The error I get in Ambari, more precisely in step 3 while performing the Test Client step:

500 status code received on POST method for API: /api/v1/clusters/HdpCluster/requests

Error message: Server Error
  • Ambari-server log:

23 Dec 2015 21:53:30,301 ERROR [pool-2-thread-1] AmbariJpaLocalTxnInterceptor:114 - [DETAILED ERROR] Rollback reason: Local Exception Stack:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "uni_alert_group_name"
Error Code: 0
Call: INSERT INTO alert_group (group_id, cluster_id, group_name, is_default, service_name) VALUES (?, ?, ?, ?, ?)
	bind => [5 parameters bound]
	at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
	at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1611)

Any clue on how to solve this issue?

1 ACCEPTED SOLUTION

Master Mentor

Back up your Ambari database, then delete the old entry in the alert_group table that matches the error.
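A minimal sketch of that cleanup, assuming the default PostgreSQL-backed Ambari install (database and user both named "ambari"; adjust to your environment) and that you stop ambari-server first:

# Stop Ambari and back up its database before changing anything
ambari-server stop
pg_dump -U ambari ambari > /tmp/ambari-db-backup.sql

# Find the row whose group_name collides with the uni_alert_group_name constraint
psql -U ambari -d ambari -c "SELECT group_id, cluster_id, group_name FROM alert_group;"

# Delete the stale entry; <group_id> is a placeholder for the duplicate row's id
psql -U ambari -d ambari -c "DELETE FROM alert_group WHERE group_id = <group_id>;"

ambari-server start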


15 REPLIES

Rising Star

Thank you for the pointer.

Actually, I was running Oracle JDK 1.7 everywhere, which according to the docs should be fine.

On the Ambari server I upgraded to JDK 1.8, but it still does not work... Maybe I have to download and run Oracle JDK 1.8 on the rest of the machines, but I wonder whether that is really necessary, since everything works fine on the nodes of the cluster. Anyway, I will give it a try and get back to you.

This is really confusing; please let me know whether this should work out of the box.

Master Mentor

@Ali Gouta it would be asking too much to replace a working JDK just to troubleshoot one node, and I don't think it's necessary. Unfortunately, I am not familiar with this error; let's hope the bigger HCC community can chip in.

Rising Star

OK! At the moment, I will leave things as they are. Ultimately, I will open a new thread for this specific question if I run out of options.

Thank you so much for your guidance and support! Much appreciated.

Rising Star
@Artem Ervits

I am actually facing a new problem while trying to add the manager host (the one running the Ambari server) to the list of cluster hosts. In the Review step, when I click Deploy, I get this error:

[screenshot attached: 1111-error-server.png]

Looking at ambari-server.log, I can see the following error:

28 Dec 2015 18:52:38,088  INFO [qtp-client-21] StackAdvisorRunner:61 - Stack-advisor output=/var/run/ambari-server/stack-recommendations/15/stackadvisor.out, error=/var/run/ambari-server/stack-recommendations/15/stackadvisor.err
28 Dec 2015 18:52:38,170  INFO [qtp-client-21] StackAdvisorRunner:69 - Stack advisor output files
28 Dec 2015 18:52:38,170  INFO [qtp-client-21] StackAdvisorRunner:70 -     advisor script stdout: StackAdvisor implementation for stack HDP, version 2.0.6 was loaded
StackAdvisor implementation for stack HDP, version 2.1 was loaded
StackAdvisor implementation for stack HDP, version 2.2 was loaded
StackAdvisor implementation for stack HDP, version 2.3 was loaded
Returning HDP23StackAdvisor implementation
28 Dec 2015 18:52:38,170  INFO [qtp-client-21] StackAdvisorRunner:71 -     advisor script stderr:
28 Dec 2015 18:52:44,781  WARN [qtp-client-21] ServletHandler:563 - /api/v1/clusters/HdpCluster/services
java.lang.RuntimeException: Trying to create a ServiceComponent not recognized in stack info, clusterName=HdpCluster, serviceName=HDFS, componentName=NFS_GATEWAY, stackInfo=HDP-2.2
        at org.apache.ambari.server.state.ServiceComponentImpl.<init>(ServiceComponentImpl.java:107)
        at org.apache.ambari.server.state.ServiceComponentImpl$$EnhancerByGuice$$685980cd.<init>(<generated>)

Three months ago I upgraded my cluster from HDP 2.2 to HDP 2.3; the error might be related to that. Do you have any idea how I can solve it? Do you recommend opening a new thread instead?
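On my side, I assume something like this would show which stack version Ambari has recorded for the cluster (the admin credentials and ambari-host are placeholders for my setup):

# Check the stack version Ambari reports for the cluster
curl -u admin:admin "http://ambari-host:8080/api/v1/clusters/HdpCluster?fields=Clusters/version"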

Rising Star

And it looks like NFS_GATEWAY does not exist on any host. In Ambari -> HDFS, when I click on NFS_GATEWAY, I don't see any host hosting it:

No hosts to display

Is there a risk of removing it with a curl DELETE call?
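For concreteness, I imagine the call would look something like this (again, the admin credentials and ambari-host are placeholders):

# Ambari rejects modifying requests that lack the X-Requested-By header
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  http://ambari-host:8080/api/v1/clusters/HdpCluster/services/HDFS/components/NFS_GATEWAY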

Master Mentor

It's always better to open a new thread for a separate issue; you have a better chance of getting an answer than risking your issue getting buried in old replies. As for the issue you're having, have you tried restarting Ambari and running service checks recently? If so, run the API commands to get a list of components, as sketched below, clean up any reference to NFS_GATEWAY, and run service checks again. You have some issues to work through, @Ali Gouta.
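A rough sketch of those listing calls, assuming default admin credentials and an Ambari server at ambari-host:8080 (substitute your own values):

# List every component registered in the cluster
curl -u admin:admin http://ambari-host:8080/api/v1/clusters/HdpCluster/components

# Or narrow it to HDFS, to confirm NFS_GATEWAY is registered but unassigned to any host
curl -u admin:admin http://ambari-host:8080/api/v1/clusters/HdpCluster/services/HDFS/components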