
Error during Spark on YARN installation


Hi,

 

When I try to add the "Spark on YARN" service with Cloudera Manager 5.4.3, I get the following error:

 

javax.persistence.PersistenceException: org.hibernate.exception.ConstraintViolationException: could not perform addBatch
    at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1387)

 

Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into SERVICES (OPTIMISTIC_LOCK_VERSION, NAME, DISPLAY_NAME, SERVICE_TYPE, MAINTENANCE_COUNT, GENERATION, CLUSTER_ID, SERVICE_ID) values ('0', 'spark_on_yarn', 'Spark', 'SPARK_ON_YARN', '0', '1', '2', '55') was aborted. Call getNextException to see the cause.
    at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2598)

 

Any idea?

 

regards,

1 ACCEPTED SOLUTION


It works!

 

I had an old CDH version (5.2) that was upgraded to 5.4.4 last week.

 

In my DB, the Spark service used the following name:

 

scm=# select * from services;
 service_id | optimistic_lock_version |    name    | service_type | cluster_id | maintenance_count |        display_name         | generation
------------+-------------------------+------------+--------------+------------+-------------------+-----------------------------+------------
         38 |                      30 | spark      | SPARK        |          2 |                 0 | Spark                       |          1

 

After deleting the standalone Spark service and installing Spark on YARN, the table was updated:

 

scm=# select * from services;
 service_id | optimistic_lock_version |     name      | service_type  | cluster_id | maintenance_count |        display_name         | generation
------------+-------------------------+---------------+---------------+------------+-------------------+-----------------------------+------------
         63 |                       3 | spark_on_yarn | SPARK_ON_YARN |          2 |                 0 | Spark                       |          1
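
For anyone hitting the same error: the stack trace does not say which uniqueness rule the failed insert tripped. If you want to check, the constraints and indexes on the SERVICES table can be listed directly (just a sketch, assuming the same PostgreSQL scm database as the queries above):

scm=# select conname, pg_get_constraintdef(oid) from pg_constraint where conrelid = 'services'::regclass;
scm=# select indexname, indexdef from pg_indexes where tablename = 'services';

In my case the collision came from the leftover standalone Spark entry, so deleting it was enough.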

 

regards,

 

 


2 REPLIES

Do you have the standalone Spark service also present in the cluster? If so, please rename the existing SPARK service to, say, "SPARK standalone" before attempting to add Spark on YARN using "Add a Service".
Regards,
Gautam Gopalakrishnan
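
If you want to confirm which service is colliding before renaming anything, a read-only check against the scm database also works (a sketch assuming the default PostgreSQL backend and the SERVICES columns shown in this thread):

scm=# select service_id, name, display_name, service_type, cluster_id from services where service_type like 'SPARK%' or lower(display_name) = 'spark';

Any row of type SPARK, or one already using the "Spark" display name in the same cluster, is the likely source of the conflict.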

