Support Questions

Find answers, ask questions, and share your expertise

S020 Data storage error

Expert Contributor

How do I fix this? I am getting this error in Hive View. I looked at a similar posting but I'm not sure what actually needs to be done. I recently upgraded from Ambari 2.2 to 2.5.

1 ACCEPTED SOLUTION

Master Mentor

@Prakash Punj

It looks like you recently migrated some "Saved Queries" from Hive View 1.0 to Hive View 1.5.

If the migration is done twice, the same error can appear the next time any query is run:

Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "ds_jobimpl_206_pkey"  Detail: Key (ds_id)=(52) already exists.
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)


As a workaround you can try the following:

1. Take a DB dump of your Ambari Database.
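For a PostgreSQL-backed Ambari database, the dump could look like the following sketch. The user and database names ("ambari"/"ambari") are taken from the psql session shown in the next step; adjust them and the output path for your environment.

```shell
# Back up the Ambari database before touching any sequence values.
# User/database names match the psql session below; adjust as needed.
pg_dump -U ambari ambari > /tmp/ambari_db_backup.sql
```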

2. Find the max sequence ID in the "ambari_sequences" table of the Ambari database.

# psql -U ambari ambari
Password for user ambari: 
psql (9.2.18)
Type "help" for help.
ambari=> select * from ambari_sequences;
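Before bumping the sequence, it may help to compare the stored sequence value with the highest key already present in the backing table. Note that the table name ds_jobimpl_206 is inferred here from the constraint name "ds_jobimpl_206_pkey" in the error above, so verify that it exists in your schema first:

```sql
-- Current value Ambari has recorded for the jobs sequence
SELECT sequence_value
  FROM ambari_sequences
 WHERE sequence_name = 'ds_jobimpl_206_id_seq';

-- Highest key already in use (table name inferred from "ds_jobimpl_206_pkey")
SELECT MAX(ds_id) FROM ds_jobimpl_206;
```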



You will find a sequence named "ds_jobimpl_206_id_seq". You will need to increase its value to a sufficiently high number to avoid the "duplicate key value violates unique constraint" issue.

So update the value as follows (a very high value, 12000, is set intentionally):

UPDATE ambari_sequences set sequence_value=12000 WHERE sequence_name='ds_jobimpl_206_id_seq';
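If you prefer not to hard-code 12000, one alternative sketch is to bump the stored value by a generous offset instead (same caveat as above: take a backup first):

```sql
-- Push the sequence well past any existing ds_id instead of hard-coding 12000
UPDATE ambari_sequences
   SET sequence_value = sequence_value + 10000
 WHERE sequence_name = 'ds_jobimpl_206_id_seq';
```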

Then restart the Ambari server (ambari-server restart).


This looks related to the bug https://issues.apache.org/jira/browse/AMBARI-21977 (not resolved yet, so we should try the workaround for now).


17 REPLIES

Master Mentor

@Prakash Punj

"S020 Data storage error" is a generic error, Hence in order to findout the actual cause of failure, we will need to look at the detailed stacktrace of this error.

So can you please check and share the "Hive View" logs and also the ambari-server.log?
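To locate the relevant stack trace quickly, something like the following grep could help. The demo below runs against a tiny sample file it creates itself; on a real server you would point the same command at /var/log/ambari-server/ambari-server.log instead:

```shell
# Build a tiny sample log to demonstrate the search (the real target would
# be /var/log/ambari-server/ambari-server.log).
printf '%s\n' \
  'INFO  normal startup message' \
  'Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value' \
  '        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(...)' \
  > /tmp/sample-ambari.log

# Print each matching line plus one line of context after it.
grep -A 1 'PSQLException' /tmp/sample-ambari.log
```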

Expert Contributor

Hive View log: these are the last 50 lines from the recent hiveserver2.log.2017-10-01:

2017-10-01 23:54:23,382 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:release(309)) - We are resetting the hadoop caller context for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:54:23,401 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:acquire(295)) - We are setting the hadoop caller context to be353dbf-a100-4c3b-85a9-f2e1f707e423 for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:54:23,401 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:release(309)) - We are resetting the hadoop caller context for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:54:23,450 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:acquire(295)) - We are setting the hadoop caller context to be353dbf-a100-4c3b-85a9-f2e1f707e423 for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:54:23,450 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:acquire(295)) - We are setting the hadoop caller context to be353dbf-a100-4c3b-85a9-f2e1f707e423 for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:54:23,476 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:release(309)) - We are resetting the hadoop caller context for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:57:23,693 INFO  [HiveServer2-Handler-Pool: Thread-40]: thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(294)) - Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8
2017-10-01 23:57:23,696 INFO  [HiveServer2-Handler-Pool: Thread-40]: metastore.ObjectStore (ObjectStore.java:initialize(294)) - ObjectStore, initialize called
2017-10-01 23:57:23,726 INFO  [HiveServer2-Handler-Pool: Thread-40]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(140)) - Using direct SQL, underlying DB is MYSQL
2017-10-01 23:57:23,727 INFO  [HiveServer2-Handler-Pool: Thread-40]: metastore.ObjectStore (ObjectStore.java:setConf(277)) - Initialized ObjectStore
2017-10-01 23:57:23,858 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.SessionState (SessionState.java:createPath(613)) - Created local directory: /tmp/94104db2-4ff2-40b1-97eb-e6056fd7e376_resources
2017-10-01 23:57:23,876 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.SessionState (SessionState.java:createPath(613)) - Created HDFS directory: /tmp/hive/anonymous/94104db2-4ff2-40b1-97eb-e6056fd7e376
2017-10-01 23:57:23,883 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.SessionState (SessionState.java:createPath(613)) - Created local directory: /tmp/hive/94104db2-4ff2-40b1-97eb-e6056fd7e376
2017-10-01 23:57:23,905 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.SessionState (SessionState.java:createPath(613)) - Created HDFS directory: /tmp/hive/anonymous/94104db2-4ff2-40b1-97eb-e6056fd7e376/_tmp_space.db
2017-10-01 23:57:23,908 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:setOperationLogSessionDir(248)) - Operation log session directory is created: /tmp/hive/operation_logs/94104db2-4ff2-40b1-97eb-e6056fd7e376
2017-10-01 23:57:24,151 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:acquire(295)) - We are setting the hadoop caller context to 94104db2-4ff2-40b1-97eb-e6056fd7e376 for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:57:24,152 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:release(309)) - We are resetting the hadoop caller context for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:57:24,191 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:acquire(295)) - We are setting the hadoop caller context to 94104db2-4ff2-40b1-97eb-e6056fd7e376 for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:57:24,192 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:release(309)) - We are resetting the hadoop caller context for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:57:24,240 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:acquire(295)) - We are setting the hadoop caller context to 94104db2-4ff2-40b1-97eb-e6056fd7e376 for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:57:24,240 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:acquire(295)) - We are setting the hadoop caller context to 94104db2-4ff2-40b1-97eb-e6056fd7e376 for thread HiveServer2-Handler-Pool: Thread-40
2017-10-01 23:57:24,284 INFO  [HiveServer2-Handler-Pool: Thread-40]: session.HiveSessionImpl (HiveSessionImpl.java:release(309)) - We are resetting the hadoop caller context for thread HiveServer2-Handler-Pool: Thread-40

Expert Contributor

@Jay SenSharma

These are the last 50 lines from the ambari-server.log:

[centos@hdp-m:/var/log/ambari-server ] $ tail -50 ambari-server.log
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
        at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:212)
        at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:201)
        at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:139)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
        at org.eclipse.jetty.server.Server.handle(Server.java:370)
        at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
        at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:984)
        at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1045)
        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
        at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:236)
        at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
        at java.lang.Thread.run(Thread.java:745)
01 Oct 2017 23:20:07,110  INFO [pool-17-thread-1] MetricSinkWriteShardHostnameHashingStrategy:42 - Calculated collector shard hdp-m.asotc based on hostname: hdp-m.asotc
02 Oct 2017 00:30:10,487  INFO [pool-17-thread-1] MetricSinkWriteShardHostnameHashingStrategy:42 - Calculated collector shard hdp-m.asotc based on hostname: hdp-m.asotc
02 Oct 2017 01:40:14,577  INFO [pool-17-thread-1] MetricSinkWriteShardHostnameHashingStrategy:42 - Calculated collector shard hdp-m.asotc based on hostname: hdp-m.asotc
02 Oct 2017 02:50:18,344  INFO [pool-17-thread-1] MetricSinkWriteShardHostnameHashingStrategy:42 - Calculated

Contributor

@Prakash Punj,

Can you check whether any of the filesystems are full, or whether any of your disks have gone bad?

Expert Contributor

@kalai selvan.

I don't think disk space is the problem:

[centos@hdp-m:/usr/hdp/ssl ] $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       160G   73G   88G  46% /
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.8G   12K  7.8G   1% /dev/shm
tmpfs           7.8G  788M  7.0G  10% /run
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
tmpfs           1.6G     0  1.6G   0% /run/user/1000
tmpfs           1.6G     0  1.6G   0% /run/user/1011
tmpfs           1.6G     0  1.6G   0% /run/user/1003
tmpfs           1.6G     0  1.6G   0% /run/user/1008
tmpfs           1.6G     0  1.6G   0% /run/user/0


Master Mentor

@Prakash Punj

The following error stack trace is not complete; we need to see where this error begins. So can you please share the complete stack trace of the following thread stack:

[centos@hdp-m:/var/log/ambari-server ] $ tail -50 ambari-server.log
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)


Expert Contributor

@Jay SenSharma

Sending you the last 2 days of log trace from the ambari-server.log file. It's a zip file: ambari.zip

Expert Contributor

@Jay SenSharma

Any update on this?

Thanks

Master Mentor

@Prakash Punj

Unfortunately, in the "ambari.zip" I did not see the error that you mentioned ("S020 Data storage error") anywhere:

$ grep 'S020' ~/Downloads/ambari.txt
