Member since: 05-20-2015
Posts: 13
Kudos Received: 2
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
| 10942 | 04-15-2016 05:44 AM
| 7708 | 03-11-2016 10:12 AM
| 6405 | 01-18-2016 06:45 AM
| 2148 | 07-31-2015 03:53 AM
04-15-2016
05:44 AM
1 Kudo
Agh, it should have been more obvious to me. It was a problem with the keystore and certificates associated with the SSL implementation. I also found more implementation-specific documentation that I had not noticed originally. Back up and running now.
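Since the fix above was on the TLS side (keystore/certificates), a generic sanity check is to inspect the service certificate's subject and validity window. This sketch generates a throwaway self-signed certificate so the commands run anywhere; in practice you would point `openssl` at the certificate exported from the NameNode's keystore, and the hostname below is a placeholder.

```shell
# Hypothetical check: is the cert for the right host, and is it still valid?
CERT="$(mktemp -d)/nn.pem"

# Stand-in for a real exported certificate (self-signed, placeholder CN).
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=namenode.example.com" \
  -keyout /dev/null -out "$CERT" 2>/dev/null

# Print the subject and expiry date of the certificate.
openssl x509 -in "$CERT" -noout -subject -enddate
```

A mismatched CN or an expired cert both produce exactly the kind of silent SSL startup failure described here.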
03-20-2016
04:57 AM
I decided to try replacing the metadata with the old backup from the day prior to the Kerberos implementation, then started the cluster. I get the same error on startup about the NameNode txid being slightly lower than expected (just different edit numbers). I went ahead and reverted this, so I'm back to using the same metadata described in the initial post above. Meanwhile, I can't run any hadoop/hdfs commands to check or recover the system, because they expect the NameNode port to be open and accepting connections.
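The rollback described above can be sketched as follows. This is illustrative only: `NN_DIR` stands in for the real `dfs.namenode.name.dir` (e.g. `/dfs/nn`), and scratch directories are used so the commands are safe to run anywhere.

```shell
# Hypothetical metadata rollback sketch; paths are placeholders.
NN_DIR="$(mktemp -d)/nn"
BACKUP="$(mktemp -d)/nn-backup.tar.gz"

# Simulate existing NameNode metadata (fsimage placeholder file).
mkdir -p "$NN_DIR/current"
printf 'fsimage' > "$NN_DIR/current/fsimage_0000000000003290751"

# 1. Take a backup before touching anything (as described in the post).
tar -czf "$BACKUP" -C "$(dirname "$NN_DIR")" "$(basename "$NN_DIR")"

# 2. Move the current metadata aside rather than deleting it,
#    so the revert described above remains possible.
mv "$NN_DIR" "${NN_DIR}.broken"

# 3. Restore the prior backup in its place.
tar -xzf "$BACKUP" -C "$(dirname "$NN_DIR")"

ls "$NN_DIR/current"
```

Keeping the broken copy alongside the restored one is what makes the "I went ahead and reverted this" step above cheap and safe.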
03-19-2016
06:25 PM
Following changes to the cluster to implement Kerberos, I tried to start the cluster back up. The message I get is:

Failed to start namenode.
org.apache.hadoop.hdfs.server.namenode.EditLogInputException: Error replaying edit log at offset 0. Expected transaction ID was 3290752
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:199)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:139)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:829)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:684)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1061)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:765)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:589)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:818)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:797)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1561)
Caused by: org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException: got premature end-of-file at txid 3290751; expected file to go up to 3290785
    at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:194)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
    at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
    at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:186)
    ... 12 more

Just before implementing Kerberos I took a backup of the metadata on both NameNodes, which are in HA: I packed the NameNode metadata directory "/dfs/nn/" into a tar file. I did the same just now.

First I tried the hdfs fsck command; variations of it fail. "sudo hadoop fsck /" gives me:

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
16/03/19 21:07:45 INFO retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over servername.domain.domain/xxx.xxx.xx.xxx:8020 after 1 fail over attempts. Trying to fail over after sleeping for 641ms.
java.net.ConnectException: Call From servername.domain.domain/xxx.xxx.xx.xxx to servername.domain.domain:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1403)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)

I then decided to try "namenode -recover":

16/03/19 20:53:05 INFO namenode.NameNode: createNameNode [-recover]
You have selected Metadata Recovery mode. This mode is intended to recover lost metadata on a corrupt filesystem. Metadata recovery mode often permanently deletes data from your HDFS filesystem. Please back up your edit log and fsimage before trying this!
Are you ready to proceed? (Y/N) (Y or N) y
16/03/19 20:53:22 INFO namenode.MetaRecoveryContext: starting recovery...
16/03/19 20:53:22 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
16/03/19 20:53:22 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
16/03/19 20:53:22 INFO namenode.FSNamesystem: No KeyProvider found.
16/03/19 20:53:22 INFO namenode.FSNamesystem: fsLock is fair:true
16/03/19 20:53:22 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/03/19 20:53:22 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/03/19 20:53:22 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/03/19 20:53:22 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Mar 19 20:53:22
16/03/19 20:53:22 INFO util.GSet: Computing capacity for map BlocksMap
16/03/19 20:53:22 INFO util.GSet: VM type       = 64-bit
16/03/19 20:53:22 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/03/19 20:53:22 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/03/19 20:53:22 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=true
16/03/19 20:53:22 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
16/03/19 20:53:22 INFO blockmanagement.BlockManager: defaultReplication         = 2
16/03/19 20:53:22 INFO blockmanagement.BlockManager: maxReplication             = 512
16/03/19 20:53:22 INFO blockmanagement.BlockManager: minReplication             = 1
16/03/19 20:53:22 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/03/19 20:53:22 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
16/03/19 20:53:22 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/03/19 20:53:22 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/03/19 20:53:22 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/03/19 20:53:22 INFO namenode.FSNamesystem: fsOwner             = root (auth:KERBEROS)
16/03/19 20:53:22 INFO namenode.FSNamesystem: supergroup          = supergroup
16/03/19 20:53:22 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/03/19 20:53:22 INFO namenode.FSNamesystem: Determined nameservice ID: StandbyNameNode
16/03/19 20:53:22 INFO namenode.FSNamesystem: HA Enabled: true
16/03/19 20:53:22 INFO namenode.FSNamesystem: Append Enabled: true
16/03/19 20:53:22 INFO util.GSet: Computing capacity for map INodeMap
16/03/19 20:53:22 INFO util.GSet: VM type       = 64-bit
16/03/19 20:53:22 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/03/19 20:53:22 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/03/19 20:53:22 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/03/19 20:53:22 INFO util.GSet: Computing capacity for map cachedBlocks
16/03/19 20:53:22 INFO util.GSet: VM type       = 64-bit
16/03/19 20:53:22 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/03/19 20:53:22 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/03/19 20:53:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/03/19 20:53:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/03/19 20:53:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/03/19 20:53:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/03/19 20:53:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/03/19 20:53:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/03/19 20:53:22 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/03/19 20:53:22 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/03/19 20:53:22 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/03/19 20:53:22 INFO util.GSet: VM type       = 64-bit
16/03/19 20:53:22 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/03/19 20:53:22 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/03/19 20:53:22 INFO namenode.NNConf: ACLs enabled? false
16/03/19 20:53:22 INFO namenode.NNConf: XAttrs enabled? true
16/03/19 20:53:22 INFO namenode.NNConf: Maximum size of an xattr: 16384
16/03/19 20:53:22 INFO hdfs.StateChange: STATE* Safe mode is ON. It was turned on manually. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
16/03/19 20:53:22 WARN common.Storage: Storage directory /tmp/hadoop-root/dfs/name does not exist
16/03/19 20:53:22 WARN namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:314)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1061)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:765)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1387)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1477)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1561)
16/03/19 20:53:22 INFO namenode.MetaRecoveryContext: RECOVERY FAILED: caught exception
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:314)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1061)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:765)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1387)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1477)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1561)
16/03/19 20:53:22 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:314)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1061)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:765)

I'm looking for advice on how to proceed! I really do not know what to do at this point. Thanks for the help.
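One detail worth noting in the recovery output: the storage directory it complains about is `/tmp/hadoop-root/dfs/name`, which is Hadoop's built-in default, not the cluster's `/dfs/nn`. That suggests `namenode -recover` was launched without the cluster's `hdfs-site.xml` on its classpath. A hedged pre-flight check is to confirm which `dfs.namenode.name.dir` the config actually declares; the sketch below writes a mock config file so it is runnable anywhere (on a CM-managed cluster the live config would be under a path like `/var/run/cloudera-scm-agent/process/<id>-hdfs-NAMENODE/`, which is an assumption here).

```shell
# Mock hdfs-site.xml standing in for the real cluster config.
CONF_DIR="$(mktemp -d)"
cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///dfs/nn</value>
  </property>
</configuration>
EOF

# Crude extraction of the configured storage directory (fine for a sanity check).
grep -A1 'dfs.namenode.name.dir' "$CONF_DIR/hdfs-site.xml" | grep -o 'file://[^<]*'

# The recovery attempt would then be re-run with that config visible, e.g.:
#   HADOOP_CONF_DIR="$CONF_DIR" hdfs namenode -recover
```

If the extracted value is not the directory holding your fsimage/edits, the recovery tool will look in the wrong place, exactly as the `/tmp/hadoop-root/dfs/name` warning shows.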
Labels:
- HDFS
03-11-2016
10:12 AM
Had to rebuild the Sentry database too to get that started. That then solved the previous Hive canary error. Complicated, yet interesting.
03-11-2016
09:07 AM
It seems I was able to at least partially fix the metastore DB in postgres. It now starts up much more cleanly according to the logs, but now I get: "The Hive Metastore canary failed to drop the table it created".
03-11-2016
08:18 AM
I see a problem in Hive now, so perhaps this is where the issue lies:

[main]: Query for candidates of org.apache.hadoop.hive.metastore.model.MVersionTable and subclasses resulted in no possible candidates
Required table missing : ""VERSION"" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : ""VERSION"" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
    at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:485)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3380)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3190)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2841)
    at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1605)
    at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:954)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:679)
    at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:408)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:947)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:370)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1744)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
    at org.datanucleus.store.query.Query.execute(Query.java:1654)
    at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMSchemaVersion(ObjectStore.java:6957)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMetaStoreSchemaVersion(ObjectStore.java:6941)
    at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:6899)
    at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:6883)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
    at com.sun.proxy.$Proxy10.verifySchema(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:575)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:623)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:464)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5775)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5770)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6022)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5947)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Mar 10, 8:51:19.864 AM ERROR org.apache.hadoop.hive.metastore.HiveMetaStore [main]: MetaException(message:Version information not found in metastore. )
    at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:6902)
    at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:6883)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
    at com.sun.proxy.$Proxy10.verifySchema(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:575)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:623)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:464)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5775)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5770)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6022)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5947)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Mar 10, 8:51:19.864 AM ERROR org.apache.hadoop.hive.metastore.HiveMetaStore [main]: Metastore Thrift Server threw an exception... MetaException(message:Version information not found in metastore. )
    at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:6902)
    at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:6883)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
    at com.sun.proxy.$Proxy10.verifySchema(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:575)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:623)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:464)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5775)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5770)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6022)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5947)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Mar 10, 8:55:20.286 AM INFO org.apache.hadoop.hive.metastore.HiveMetaStore [Thread-3]: Shutting down hive metastore.

Connecting to postgres, I cannot find the metastore DB. But when I shut down Hive and then use the Cloudera Manager option to generate the metastore, it succeeds with "already exists".
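The DataNucleus error above means the database the metastore connects to has no VERSION table, i.e. its schema was never created there (or the service is pointed at the wrong database). Hive ships a `schematool` for exactly this situation; the real commands are shown as comments because they need a live cluster, and the runnable part below only mimics the decision logic as a sketch.

```shell
# Real-world commands (require a configured Hive installation):
#   schematool -dbType postgres -info        # report the current schema version, if any
#   schematool -dbType postgres -initSchema  # create the schema, including VERSION
#
# Illustrative decision only: an empty version stands in for "VERSION table missing".
schema_version=""
if [ -z "$schema_version" ]; then
  echo "schema missing: would run initSchema"
else
  echo "schema present at $schema_version"
fi
```

Note `initSchema` should only be run against a database that is genuinely empty; on a populated metastore the matching tool is `upgradeSchema`.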
03-11-2016
07:11 AM
Updated with full input/output:

sudo sentry --command schema-tool --conffile /etc/sentry/conf/sentry-site.xml --dbType postgres --initSchema

Sentry store connection URL: jdbc:derby:;databaseName=sentry_store_db;create=true
Sentry store Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
Sentry store connection User: Sentry
Starting sentry store schema initialization to 1.5.0
Initialization script sentry-postgres-1.5.0.sql
Connecting to jdbc:derby:;databaseName=sentry_store_db;create=true
Connected to: Apache Derby (version 10.10.2.0 - (1582446))
Driver: Apache Derby Embedded JDBC Driver (version 10.10.2.0 - (1582446))
Transaction isolation: TRANSACTION_REPEATABLE_READ
Autocommit status: true
Error: Syntax error: Encountered "START" at line 1, column 1. (state=42X01,code=30000)
Closing: 0: jdbc:derby:;databaseName=sentry_store_db;create=true
org.apache.sentry.SentryUserException: Schema initialization FAILED! Metastore state would be inconsistent !!
*** Sentry schemaTool failed ***
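One telling detail in the output above: although `--dbType postgres` was passed, the tool reports a Derby connection URL (`jdbc:derby:;databaseName=sentry_store_db`) and then fails to parse the postgres init script ("Encountered START"). That pattern suggests the JDBC settings in sentry-site.xml were not picked up and the tool fell back to embedded Derby. A hedged pre-flight check is to confirm the configured JDBC URL is actually a postgres one; the property name and config file below are illustrative, with a mock file so the sketch runs anywhere.

```shell
# Mock sentry-site.xml; in practice you'd point at /etc/sentry/conf/sentry-site.xml.
CONF="$(mktemp -d)/sentry-site.xml"
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>sentry.store.jdbc.url</name>
    <value>jdbc:postgresql://dbhost:5432/sentry</value>
  </property>
</configuration>
EOF

# Extract the JDBC URL and verify the driver scheme before running schema-tool.
url="$(grep -A1 'sentry.store.jdbc.url' "$CONF" | grep -o 'jdbc:[^<]*')"
case "$url" in
  jdbc:postgresql:*) echo "OK: postgres URL configured: $url" ;;
  *)                 echo "WARNING: not a postgres URL: $url" ;;
esac
```

If the URL check fails (or the property is absent), schema-tool will run the postgres SQL script against Derby, producing exactly the Derby syntax error shown above.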
03-10-2016
08:03 AM
Environment: Cloudera Enterprise 5.5.1
Services: HDFS, Hive, Impala, Sentry, Solr, Spark, YARN, Zookeeper

I just installed and set up Hive, Impala, and Sentry together. Sentry was able to create the postgres tables, but the service does not start. I get the following output when running the command "sudo sentry --command schema-tool -conffile /etc/sentry/conf/sentry-file.xml --dbType postgres --initSchema":

Error: Syntax error: Encountered "START" at line 1, column 1. (state=42X01,code=30000)
Closing: 0: jdbc:derby:;databaseName=sentry_store_db;create=true
org.apache.sentry.SentryUserException: Schema initialization failed! Metastore state would be inconsistent!!
*** Sentry schemaTool failed ***

I am having trouble finding any resources on this (some results relate to Hive showing this error, but not Sentry). Any ideas on next steps?
01-18-2016
06:45 AM
Not sure what happened. Restarting the entire cluster failed to fix the issue; however, after restarting the individual cluster management services, the problem resolved itself and everything currently looks good.
01-13-2016
04:30 AM
2016-01-12 13:24:58,093 INFO com.cloudera.cmon.firehose.CMONConfiguration: Config: file:/var/run/cloudera-scm-agent/process/788-cloudera-mgmt-ACTIVITYMONITOR/cmon.conf
2016-01-12 13:25:00,933 INFO com.cloudera.cmf.cdhclient.util.CDHUrlClassLoader: Detected that this program is running in a JAVA 1.7.0_67 JVM. CDH5 jars will be loaded from:lib/cdh5
2016-01-12 13:25:00,935 INFO com.cloudera.enterprise.ssl.SSLFactory: Using default java truststore for verification of server certificates in HTTPS communication.
2016-01-12 13:25:01,435 WARN com.cloudera.cmf.BasicScmProxy: Exception while getting fetch configDefaults hash: none
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)

I am seeing this error after upgrading from CDH 5.3.3 to CDH 5.5.1. I was able to start the cluster without a problem, but it seems Cloudera Manager is not able to query any services to capture service state, logs, etc. Any ideas?
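The "Connection refused" above comes from a management role trying to call back to the Cloudera Manager server, so the first thing to establish is whether anything is listening on the server's ports at all (7180 is the default web UI port and 7182 the agent heartbeat port; treat both as assumptions for your deployment). A minimal reachability probe using bash's `/dev/tcp` redirection:

```shell
# Probe a host/port pair; prints "open" or "refused-or-unreachable".
probe() {
  local host="$1" port="$2"
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    exec 3>&- 3<&-   # close the probe descriptor again
    echo "open"
  else
    echo "refused-or-unreachable"
  fi
}

# Example: a high port with (almost certainly) no listener behaves like the
# refused connection in the log above. In practice you would probe the CM
# server host on 7180/7182 instead.
probe 127.0.0.1 59999
```

If the CM server ports probe as refused, the fix is on the server side (service down, or still coming up after the upgrade) rather than in the monitoring roles themselves.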