Member since: 06-07-2019
Posts: 17
Kudos Received: 0
Solutions: 0
07-05-2019
12:47 AM
Another thing I found out is that the HBase Master server is not starting. I never managed to start it.
... View more
07-04-2019
10:34 PM
Hi, can you guide me on how to check? I am using the default Postgres database.
... View more
07-04-2019
04:18 AM
Impala error: E0704 14:17:50.711091 60291 impala-server.cc:1564] There was an error processing the impalad catalog update. Requesting a full topic update to recover: RuntimeException: java.io.FileNotFoundException: /run/cloudera-scm-agent/process/117-impala-IMPALAD/hadoop-conf/core-site.xml (Permission denied) CAUSED BY: FileNotFoundException: /run/cloudera-scm-agent/process/117-impala-IMPALAD/hadoop-conf/core-site.xml (Permission denied) E0704 14:17:52.714336 60292 Configuration.java:2889] error parsing conf core-site.xml Java exception follows: java.io.FileNotFoundException: /run/cloudera-scm-agent/process/117-impala-IMPALAD/hadoop-conf/core-site.xml (Permission denied) at java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.<init>(FileInputStream.java:138) at java.io.FileInputStream.<init>(FileInputStream.java:93) at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90) at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188) at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2813) at org.apache.hadoop.conf.Configuration.getStreamReader(Configuration.java:2906) at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2864) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2838) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2715) at org.apache.hadoop.conf.Configuration.set(Configuration.java:1352) at org.apache.hadoop.conf.Configuration.set(Configuration.java:1324) at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:518) at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:536) at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:430) at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:4061) at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:4024) at 
org.apache.impala.catalog.MetaStoreClientPool.<init>(MetaStoreClientPool.java:155) at org.apache.impala.catalog.Catalog.<init>(Catalog.java:99) at org.apache.impala.catalog.ImpaladCatalog.<init>(ImpaladCatalog.java:96) at org.apache.impala.service.FeCatalogManager$CatalogdImpl.createNewCatalog(FeCatalogManager.java:122) at org.apache.impala.service.FeCatalogManager$CatalogdImpl.updateCatalogCache(FeCatalogManager.java:109) at org.apache.impala.service.Frontend.updateCatalogCache(Frontend.java:308) at org.apache.impala.service.JniFrontend.updateCatalogCache(JniFrontend.java:187) E0704 14:17:52.714665 60292 impala-server.cc:1564] There was an error processing the impalad catalog update. Requesting a full topic update to recover: RuntimeException: java.io.FileNotFoundException: /run/cloudera-scm-agent/process/117-impala-IMPALAD/hadoop-conf/core-site.xml (Permission denied) CAUSED BY: FileNotFoundException: /run/cloudera-scm-agent/process/117-impala-IMPALAD/hadoop-conf/core-site.xml (Permission denied)
... View more
07-04-2019
04:16 AM
javax.jdo.JDODataStoreException: Error executing SQL query "select "DB_ID" from "DBS"". at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) ~[datanucleus-api-jdo-4.2.1.jar:?] at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388) ~[datanucleus-api-jdo-4.2.1.jar:?] at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:213) ~[datanucleus-api-jdo-4.2.1.jar:?] at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.runTestQuery(MetaStoreDirectSql.java:323) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:191) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:425) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:356) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:317) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77) [hadoop-common-3.0.0-cdh6.2.0.jar:?] at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137) [hadoop-common-3.0.0-cdh6.2.0.jar:?] 
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:687) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:653) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:647) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:716) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:419) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7028) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7023) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:7281) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:7208) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_181] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_181] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181] at org.apache.hadoop.util.RunJar.run(RunJar.java:313) [hadoop-common-3.0.0-cdh6.2.0.jar:?] at org.apache.hadoop.util.RunJar.main(RunJar.java:227) [hadoop-common-3.0.0-cdh6.2.0.jar:?] Caused by: org.postgresql.util.PSQLException: ERROR: relation "DBS" does not exist Position: 21 at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2477) ~[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7] at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2190) ~[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7] at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:300) ~[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7] at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428) ~[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7] at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354) ~[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7] at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:169) ~[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7] at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:117) ~[postgresql-42.1.4.jre7.89c9f79016bab67349a92c00c55907dd.jar:42.1.4.jre7] at com.jolbox.bonecp.PreparedStatementHandle.executeQuery(PreparedStatementHandle.java:174) ~[bonecp-0.8.0.RELEASE.jar:?] at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeQuery(ParamLoggingPreparedStatement.java:375) ~[datanucleus-rdbms-4.1.7.jar:?] at org.datanucleus.store.rdbms.SQLController.executeStatementQuery(SQLController.java:552) ~[datanucleus-rdbms-4.1.7.jar:?] 
at org.datanucleus.store.rdbms.query.SQLQuery.performExecute(SQLQuery.java:645) ~[datanucleus-rdbms-4.1.7.jar:?] at org.datanucleus.store.query.Query.executeQuery(Query.java:1844) ~[datanucleus-core-4.1.6.jar:?] at org.datanucleus.store.rdbms.query.SQLQuery.executeWithArray(SQLQuery.java:807) ~[datanucleus-rdbms-4.1.7.jar:?] at org.datanucleus.store.query.Query.execute(Query.java:1715) ~[datanucleus-core-4.1.6.jar:?] at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:371) ~[datanucleus-api-jdo-4.2.1.jar:?] ... 27 more 2019-07-02 16:35:26,855 ERROR org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: MetaException(message:Version information not found in metastore. ) at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:8066) at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:8043) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) at com.sun.proxy.$Proxy23.verifySchema(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:654) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:647) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:716) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:419) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84) at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7028) at 
org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7023) at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:7281) at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:7208) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:313) at org.apache.hadoop.util.RunJar.main(RunJar.java:227) 2019-07-02 16:35:26,856 ERROR org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: Metastore Thrift Server threw an exception... org.apache.hadoop.hive.metastore.api.MetaException: Version information not found in metastore. at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:8066) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:8043) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_181] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_181] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181] at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at com.sun.proxy.$Proxy23.verifySchema(Unknown Source) ~[?:?] 
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:654) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:647) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:716) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:419) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7028) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7023) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:7281) ~[hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:7208) [hive-exec-2.1.1-cdh6.2.0.jar:2.1.1-cdh6.2.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_181] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_181] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181] at org.apache.hadoop.util.RunJar.run(RunJar.java:313) [hadoop-common-3.0.0-cdh6.2.0.jar:?] at org.apache.hadoop.util.RunJar.main(RunJar.java:227) [hadoop-common-3.0.0-cdh6.2.0.jar:?]
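A possible reading of the trace above: `relation "DBS" does not exist` together with `Version information not found in metastore` usually means the Hive Metastore schema was never created in the backing PostgreSQL database. A hedged sketch of checking and initializing it with Hive's schematool follows; the parcel path is an assumption for a CDH 6 layout and the commands must run on the metastore host:

```shell
# Assumed CDH parcel location of schematool; adjust to your install.
SCHEMATOOL=/opt/cloudera/parcels/CDH/lib/hive/bin/schematool

if [ -x "$SCHEMATOOL" ]; then
  # Report the schema version currently recorded in the metastore DB.
  "$SCHEMATOOL" -dbType postgres -info
  # Uncomment to create the schema tables (run once, on an empty database):
  # "$SCHEMATOOL" -dbType postgres -initSchema
else
  echo "schematool not found at $SCHEMATOOL; run this on the metastore host"
fi
```

If the schema is genuinely missing, `-info` fails with the same "version information not found" message, which is the confirmation to run `-initSchema`.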
... View more
07-04-2019
04:13 AM
And here is the impala log:
E0704 14:10:31.450600 60292 impala-server.cc:1564] There was an error processing the impalad catalog update. Requesting a full topic update to recover: RuntimeException: java.io.FileNotFoundException: /run/cloudera-scm-agent/process/117-impala-IMPALAD/hadoop-conf/core-site.xml (Permission denied) CAUSED BY: FileNotFoundException: /run/cloudera-scm-agent/process/117-impala-IMPALAD/hadoop-conf/core-site.xml (Permission denied) I0704 14:10:32.737851 59764 Frontend.java:1092] Waiting for local catalog to be initialized, attempt: 81988 E0704 14:10:33.453867 60291 Configuration.java:2889] error parsing conf core-site.xml Java exception follows: java.io.FileNotFoundException: /run/cloudera-scm-agent/process/117-impala-IMPALAD/hadoop-conf/core-site.xml (Permission denied) at java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.<init>(FileInputStream.java:138) at java.io.FileInputStream.<init>(FileInputStream.java:93) at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90) at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188) at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2813) at org.apache.hadoop.conf.Configuration.getStreamReader(Configuration.java:2906) at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2864) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2838) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2715) at org.apache.hadoop.conf.Configuration.set(Configuration.java:1352) at org.apache.hadoop.conf.Configuration.set(Configuration.java:1324) at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:518) at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:536) at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:430) at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:4061) at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:4024) at org.apache.impala.catalog.MetaStoreClientPool.<init>(MetaStoreClientPool.java:155) at org.apache.impala.catalog.Catalog.<init>(Catalog.java:99) at org.apache.impala.catalog.ImpaladCatalog.<init>(ImpaladCatalog.java:96) at org.apache.impala.service.FeCatalogManager$CatalogdImpl.createNewCatalog(FeCatalogManager.java:122) at org.apache.impala.service.FeCatalogManager$CatalogdImpl.updateCatalogCache(FeCatalogManager.java:109) at org.apache.impala.service.Frontend.updateCatalogCache(Frontend.java:308) at org.apache.impala.service.JniFrontend.updateCatalogCache(JniFrontend.java:187) I0704 14:10:33.454084 60291 jni-util.cc:256] java.lang.RuntimeException: java.io.FileNotFoundException: /run/cloudera-scm-agent/process/117-impala-IMPALAD/hadoop-conf/core-site.xml (Permission denied) at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2890) at org.apache.hadoop.conf.Configurat
Here are the permissions: -rw-r----- 1 cloudera-scm cloudera-scm 3350 Jul 2 16:36 /run/cloudera-scm-agent/process/117-impala-IMPALAD/hadoop-conf/core-site.xml
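The listing above explains the error: core-site.xml is owned by cloudera-scm:cloudera-scm with mode 640 (-rw-r-----), so a process running as a different user outside that group, such as the impala user, cannot read it. A minimal local demonstration of the mode involved (on a scratch file created with mktemp, not the real config; the real path is the one from the log, and fixes there would be group membership for the impala user or letting Cloudera Manager regenerate the process directory on role restart):

```shell
# Create a scratch file and give it the same mode as in the ls output.
f=$(mktemp)
chmod 640 "$f"

# 640 = rw for owner, r for group, nothing for others.
mode=$(stat -c '%a' "$f")
echo "mode is $mode: readable only by the owner and the owning group"

rm -f "$f"
```

Note `stat -c` is the GNU/Linux form, which matches the CDH environment here.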
... View more
07-04-2019
01:55 AM
Hi,
Impala fails to start with this error:
Any idea?
java:107
Failed to connect to Hive MetaStore. Retrying.
Java exception follows:
java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1773)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94)
at org.apache.impala.catalog.MetaStoreClientPool$MetaStoreClient.<init>(MetaStoreClientPool.java:99)
at org.apache.impala.catalog.MetaStoreClientPool$MetaStoreClient.<init>(MetaStoreClientPool.java:78)
at org.apache.impala.catalog.MetaStoreClientPool.initClients(MetaStoreClientPool.java:174)
at org.apache.impala.catalog.MetaStoreClientPool.<init>(MetaStoreClientPool.java:163)
at org.apache.impala.catalog.MetaStoreClientPool.<init>(MetaStoreClientPool.java:155)
at org.apache.impala.catalog.CatalogServiceCatalog.<init>(CatalogServiceCatalog.java:351)
at org.apache.impala.service.JniCatalog.<init>(JniCatalog.java:119)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1771)
... 11 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:545)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:303)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1771)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94)
at org.apache.impala.catalog.MetaStoreClientPool$MetaStoreClient.<init>(MetaStoreClientPool.java:99)
at org.apache.impala.catalog.MetaStoreClientPool$MetaStoreClient.<init>(MetaStoreClientPool.java:78)
at org.apache.impala.catalog.MetaStoreClientPool.initClients(MetaStoreClientPool.java:174)
at org.apache.impala.catalog.MetaStoreClientPool.<init>(MetaStoreClientPool.java:163)
at org.apache.impala.catalog.MetaStoreClientPool.<init>(MetaStoreClientPool.java:155)
at org.apache.impala.catalog.CatalogServiceCatalog.<init>(CatalogServiceCatalog.java:351)
at org.apache.impala.service.JniCatalog.<init>(JniCatalog.java:119)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.thrift.transport.TSocket.open(TSocket.java:221)
... 18 more
)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:594)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:303)
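The root cause in this trace is `java.net.ConnectException: Connection refused`, which means nothing was listening on the Hive Metastore port when Impala tried to connect, i.e. the Hive Metastore service itself was down (consistent with the metastore errors in the other posts). A hedged sketch of a quick TCP probe; the host is an assumption and 9083 is the usual metastore Thrift port:

```shell
# Assumed metastore location; replace with your metastore host.
HMS_HOST=localhost
HMS_PORT=9083

# /dev/tcp is a bash feature: the redirection succeeds only if the port accepts.
if timeout 2 bash -c "exec 3<>/dev/tcp/$HMS_HOST/$HMS_PORT" 2>/dev/null; then
  echo "metastore port $HMS_PORT on $HMS_HOST is open"
else
  echo "metastore port $HMS_PORT on $HMS_HOST refused; start Hive Metastore first"
fi
```

If the probe is refused, fixing the metastore startup (schema, permissions) has to come before Impala can start.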
... View more
07-04-2019
01:44 AM
Hi, I have removed everything and reinstalled again, and it got past the distributing step.
... View more
06-26-2019
10:54 PM
I am just running the installation from the installer and then going through the wizard to create the cluster, and it gets stuck on distributing parcels.
... View more
06-26-2019
10:30 AM
Hi guys, I had installed and configured Cloudera Manager. I decided to clean everything and install it again, and now it is failing when distributing parcels.
... View more
Labels: Cloudera Manager
06-19-2019
10:44 PM
How can I access the JIRA requested?
... View more
06-19-2019
10:46 AM
Hi, my current CM is 6.2 with the latest CDH. I will have a look, thanks.
... View more
06-18-2019
08:06 AM
Can you please advise how to do this? Is this a parameter in Cloudera Manager or a parameter in the sqoop import command? Thanks
... View more
06-18-2019
06:36 AM
Hi,
I am trying to import a single table with Sqoop and I get this error:
Warning: /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation. SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 19/06/18 16:25:25 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7-cdh6.2.0 19/06/18 16:25:26 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. 19/06/18 16:25:26 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override 19/06/18 16:25:26 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc. 19/06/18 16:25:26 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop will not process this sqoop connection, as an insufficient number of mappers are being used. 
19/06/18 16:25:26 INFO manager.SqlManager: Using default fetchSize of 1000 19/06/18 16:25:26 INFO tool.CodeGenTool: Beginning code generation 19/06/18 16:25:26 INFO tool.CodeGenTool: Will generate java class as codegen_WORKFLOW 19/06/18 16:25:27 INFO manager.OracleManager: Time zone has been set to GMT 19/06/18 16:25:27 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM WORKFLOW t WHERE 1=0 19/06/18 16:25:27 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce 19/06/18 16:25:29 ERROR orm.CompilationManager: Could not rename /tmp/sqoop-cloudera/compile/e8c2761367830b3f0e903699f598700b/codegen_WORKFLOW.java to /home/cloudera/./codegen_WORKFLOW.java. Error: Destination '/home/cloudera/./codegen_WORKFLOW.java' already exists 19/06/18 16:25:29 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/e8c2761367830b3f0e903699f598700b/codegen_WORKFLOW.jar 19/06/18 16:25:29 INFO manager.OracleManager: Time zone has been set to GMT 19/06/18 16:25:29 WARN manager.OracleManager: The table WORKFLOW contains a multi-column primary key. Sqoop will default to the column IDWORKFLOW only for this job. 19/06/18 16:25:29 INFO manager.OracleManager: Time zone has been set to GMT 19/06/18 16:25:29 WARN manager.OracleManager: The table WORKFLOW contains a multi-column primary key. Sqoop will default to the column IDWORKFLOW only for this job. 19/06/18 16:25:29 INFO mapreduce.ImportJobBase: Beginning import of WORKFLOW 19/06/18 16:25:29 INFO Configuration.deprecation: mapred.jar is deprecated. 
Instead, use mapreduce.job.jar 19/06/18 16:25:29 INFO manager.OracleManager: Time zone has been set to GMT 19/06/18 16:25:30 INFO manager.OracleManager: Time zone has been set to GMT 19/06/18 16:25:30 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM WORKFLOW t WHERE 1=0 19/06/18 16:25:30 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM WORKFLOW t WHERE 1=0 19/06/18 16:25:30 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps 19/06/18 16:25:30 INFO client.RMProxy: Connecting to ResourceManager at clouderasrv/172.23.16.226:8032 19/06/18 16:25:31 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/cloudera/.staging/job_1560863992639_0002 19/06/18 16:26:23 INFO db.DBInputFormat: Using read commited transaction isolation 19/06/18 16:26:24 INFO mapreduce.JobSubmitter: number of splits:1 19/06/18 16:26:24 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled 19/06/18 16:26:25 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1560863992639_0002 19/06/18 16:26:25 INFO mapreduce.JobSubmitter: Executing with tokens: [] 19/06/18 16:26:25 INFO conf.Configuration: resource-types.xml not found 19/06/18 16:26:25 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'. 
19/06/18 16:26:26 INFO impl.YarnClientImpl: Submitted application application_1560863992639_0002 19/06/18 16:26:26 INFO mapreduce.Job: The url to track the job: http://clouderasrv:8088/proxy/application_1560863992639_0002/ 19/06/18 16:26:26 INFO mapreduce.Job: Running job: job_1560863992639_0002 19/06/18 16:26:36 INFO mapreduce.Job: Job job_1560863992639_0002 running in uber mode : false 19/06/18 16:26:36 INFO mapreduce.Job: map 100% reduce 0% 19/06/18 16:26:37 INFO mapreduce.Job: Job job_1560863992639_0002 failed with state KILLED due to: The required MAP capability is more than the supported max container capability in the cluster. Killing the Job. mapResourceRequest: <memory:2560, vCores:1> maxContainerCapability:<memory:2048, vCores:4> Job received Kill while in RUNNING state.
19/06/18 16:26:37 INFO mapreduce.Job: Counters: 3 Job Counters Killed map tasks=1 Total time spent by all maps in occupied slots (ms)=0 Total time spent by all reduces in occupied slots (ms)=0 19/06/18 16:26:37 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead 19/06/18 16:26:37 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 66.9005 seconds (0 bytes/sec) 19/06/18 16:26:37 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead 19/06/18 16:26:37 INFO mapreduce.ImportJobBase: Retrieved 0 records. 19/06/18 16:26:37 ERROR tool.ImportTool: Import failed: Import job failed!
Any idea why this error is coming up?
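The kill reason in the log is arithmetic: the map task requested a 2560 MB container (`mapResourceRequest: <memory:2560>`) but the cluster's maximum container size is 2048 MB (`maxContainerCapability: <memory:2048>`). A hedged sketch of the comparison and the two usual remedies; the example flag values in the comments are assumptions to tune, not tested settings:

```shell
# Numbers taken from the job log above.
REQUESTED_MB=2560
MAX_CONTAINER_MB=2048

if [ "$REQUESTED_MB" -gt "$MAX_CONTAINER_MB" ]; then
  echo "map request ${REQUESTED_MB}MB exceeds cluster max ${MAX_CONTAINER_MB}MB; YARN kills the job"
fi

# Remedies (pick one):
# 1. In Cloudera Manager, raise yarn.scheduler.maximum-allocation-mb
#    (Container Memory Maximum) to at least the requested size.
# 2. Ask for less map memory per task on the Sqoop command line, e.g.:
#    sqoop import -Dmapreduce.map.memory.mb=2048 -Dmapreduce.map.java.opts=-Xmx1638m ...
```

So it is both: the cluster-wide ceiling lives in Cloudera Manager (YARN configuration), while the per-job request can be overridden with `-D` generic arguments on the sqoop command line.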
... View more
06-18-2019
02:18 AM
Hi, when trying to import a table from an Oracle database to Hive from the Hue GUI, I get the following error while trying to set up the JDBC connection:
Connection Failed: Error accessing the database: An error occurred while calling z:java.lang.Class.forName. : java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver at java.net.URLClassLoader$1.run(URLClassLoader.java:360) at java.net.URLClassLoader$1.run(URLClassLoader.java:349) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:348) at java.lang.ClassLoader.loadClass(ClassLoader.java:430) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:323) at java.lang.ClassLoader.loadClass(ClassLoader.java:363) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:195) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) at py4j.Gateway.invoke(Gateway.java:259) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:209) at java.lang.Thread.run(Thread.java:748)
By the way, Sqoop from the command line works just fine.
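That last point narrows it down: `ClassNotFoundException: oracle.jdbc.driver.OracleDriver` means the JVM serving Hue cannot see the Oracle JDBC jar, even though the command-line Sqoop classpath has it. A hedged sketch of the usual remedy, copying the ojdbc jar (downloaded from Oracle) into the directory the service loads drivers from and restarting it; both paths below are assumptions for a CDH layout, not verified locations:

```shell
# Assumed staging location of the driver jar and assumed service lib dir.
OJDBC_JAR=/tmp/ojdbc8.jar
TARGET_DIR=/var/lib/sqoop

if [ -f "$OJDBC_JAR" ] && [ -d "$TARGET_DIR" ]; then
  cp "$OJDBC_JAR" "$TARGET_DIR/" && echo "driver copied; restart the service from Cloudera Manager"
else
  echo "expected jar at $OJDBC_JAR and directory $TARGET_DIR; adjust paths to your install"
fi
```

The key design point is that each JVM resolves the driver from its own classpath, so installing the jar for command-line Sqoop does nothing for the Hue-side service.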
... View more