Member since: 07-21-2017
Posts: 62
Kudos Received: 2
Solutions: 0
01-26-2021 01:39 PM
1 Kudo
We currently have 3 DataNodes, each with 9 disks of 1 TB (one data partition per disk, so 9 partitions per node). We are planning to add 2 new DataNodes with 3 disks of 2 TB each. Do I need to configure the new DataNodes with 9 partitions like the old nodes, or is 6 partitions of 1 TB each fine? Please advise.
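For context: HDFS does not require identical disk layouts across DataNodes; each node just lists its own mount points in dfs.datanode.data.dir. A sketch of what the per-node storage config might look like on a new node, assuming one partition mounted per disk (mount paths below are hypothetical examples, not from the cluster described above):

```xml
<!-- hdfs-site.xml on a new DataNode: one entry per mounted disk partition.
     The mount points shown are placeholders. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/grid/disk1/hdfs/data,/grid/disk2/hdfs/data,/grid/disk3/hdfs/data</value>
</property>
<!-- Optional: prefer volumes with more free space when disk sizes differ -->
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>
```

With heterogeneous node sizes, also consider running the HDFS Balancer after adding the nodes so blocks spread according to available space.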
12-09-2020 09:24 AM
Yes, it is correct; I tested the DSN connection successfully.
12-07-2020 08:59 AM
@asish wrote: The location would be in odbc.ini

I could not find the odbc.ini file.

Detailed error in Power BI:

DataSource.Error: ODBC: ERROR [HY000] [Cloudera][Hardy] (35) Error from server: error code: '0' error message: 'MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException No such database row)'. Details: DataSourceKind=Odbc DataSourcePath=dsn=Hive Batch Prod OdbcErrors=[Table]

And the below error in the Hive Metastore log:

ERROR [pool-6-thread-91]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(215)) - Retrying HMSHandler after 2000 ms (attempt 5 of 10) with error: javax.jdo.JDOObjectNotFoundException: No such database row
FailedObject:2488234[OID]org.apache.hadoop.hive.metastore.model.MTable
    at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:564)
    at org.datanucleus.api.jdo.JDOAdapter.getApiExceptionForNucleusException(JDOAdapter.java:678)
    at org.datanucleus.state.StateManagerImpl.isLoaded(StateManagerImpl.java:2929)
    at org.apache.hadoop.hive.metastore.model.MTable.dnGetdatabase(MTable.java)
    at org.apache.hadoop.hive.metastore.model.MTable.getDatabase(MTable.java:224)
    at org.apache.hadoop.hive.metastore.ObjectStore.getTableMeta(ObjectStore.java:1825)
    at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
    at com.sun.proxy.$Proxy29.getTableMeta(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_meta(HiveMetaStore.java:3100)
    at sun.reflect.GeneratedMethodAccessor99.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
    at com.sun.proxy.$Proxy31.get_table_meta(Unknown Source)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_meta.getResult(ThriftHiveMetastore.java:15679)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_meta.getResult(ThriftHiveMetastore.java:15663)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:111)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:119)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
NestedThrowablesStackTrace:
No such database row
org.datanucleus.exceptions.NucleusObjectNotFoundException: No such database row
    at org.datanucleus.store.rdbms.request.FetchRequest.execute(FetchRequest.java:348)
    at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.fetchObject(RDBMSPersistenceHandler.java:319)
    at org.datanucleus.state.AbstractStateManager.loadFieldsFromDatastore(AbstractStateManager.java:1147)
    at org.datanucleus.state.StateManagerImpl.loadSpecifiedFields(StateManagerImpl.java:2564)
    at org.datanucleus.state.StateManagerImpl.isLoaded(StateManagerImpl.java:2918)
    at org.apache.hadoop.hive.metastore.model.MTable.dnGetdatabase(MTable.java)
    at org.apache.hadoop.hive.metastore.model.MTable.getDatabase(MTable.java:224)
    at org.apache.hadoop.hive.metastore.ObjectStore.getTableMeta(ObjectStore.java:1825)
    at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
    at com.sun.proxy.$Proxy31.get_table_meta(Unknown Source)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_meta.getResult(ThriftHiveMetastore.java:15679)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_meta.getResult(ThriftHiveMetastore.java:15663)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:111)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:119)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
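A "No such database row" from a JDO fetch like this usually means an inconsistent row in the metastore backing database: the FailedObject OID (2488234) is the TBLS.TBL_ID of a table whose referenced DBS row cannot be loaded. Assuming a MySQL-backed metastore with the standard Hive schema (the database name `hive` is an assumption; take a backup of the metastore DB before touching anything), a diagnostic sketch:

```sql
-- Hypothetical diagnostic queries; read-only, but back up the metastore DB
-- before making any changes based on the results.
USE hive;  -- assumed metastore database name

-- Look up the table row the JDO error points at (OID 2488234 = TBLS.TBL_ID)
SELECT TBL_ID, DB_ID, TBL_NAME FROM TBLS WHERE TBL_ID = 2488234;

-- Check whether its DB_ID still resolves to a row in DBS; a missing match
-- here is consistent with the failure in MTable.getDatabase() above.
SELECT t.TBL_ID, t.TBL_NAME, d.DB_ID, d.NAME
FROM TBLS t LEFT JOIN DBS d ON t.DB_ID = d.DB_ID
WHERE t.TBL_ID = 2488234;
```

If the join comes back with a NULL database, the TBLS row is orphaned; cleaning it up is a manual, at-your-own-risk operation best done with Cloudera support guidance.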
12-07-2020 01:12 AM
HDP version 3.1.0. Cloudera ODBC Hive driver (latest). Where should I check for driver logs? No Kerberos. Kindly note that I have the same setup in the Dev environment and it works perfectly fine.
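On the driver-log question: the Cloudera ODBC Driver for Apache Hive has its own logging, enabled per DSN (via the Logging Options dialog on Windows, or configuration keys on Linux). A sketch of a Linux odbc.ini DSN entry with trace logging turned on; the DSN name matches the error above, but the driver path, host, and log directory are hypothetical:

```ini
[Hive Batch Prod]
; Paths and host below are placeholders; adjust to your installation
Driver=/opt/cloudera/hiveodbc/lib/64/libclouderahiveodbc64.so
Host=hiveserver2.example.com
Port=10000
; LogLevel 0 = off ... 6 = trace; log files are written under LogPath
LogLevel=6
LogPath=/tmp/hive-odbc-logs
```

With this in place, reproducing the Power BI error should leave a driver trace file under the LogPath directory showing the exact Thrift calls that failed.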
12-07-2020 12:38 AM
No, I do not have issues connecting from Beeline or any other tool except Power BI.
12-04-2020 07:09 PM
I'm trying to connect to a Hive database from Power BI using the Hive ODBC driver. I've successfully configured the DSN and can connect to the server, but when I try to list the Hive databases in Power BI I get the following error: 'ODBC: ERROR [HY000] [Cloudera][Hardy] (35) Error from server: error code: '0' error message: 'MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException No such database row)'.' Can someone help?
09-08-2020 10:41 AM
I've set the value to 32M, but I am still facing this connection issue between the HMS and HS2 servers.
09-07-2020 11:28 AM
'max_allowed_packet' = 4194304. This issue started occurring after I set max_allowed_packet = 10M in my.cnf and restarted the MySQL server. HMS got stuck sinking metrics and was not connecting to MySQL, so HS2 and Hive Interactive failed to connect. I then reverted the setting in my.cnf and Hive started working for a few hours, but it fails again.
09-06-2020 08:18 AM
I have 3 HMS and 3 HS2 instances with HA enabled. Since yesterday, HS2 and Hive Interactive have failed to connect to HMS. In debug mode, the HMS log shows the following output:

DEBUG [HikariPool-1 housekeeper]: pool.HikariPool (HikariPool.java:logPoolState(390)) - HikariPool-1 - Pool stats (total=20, active=1, idle=19, waiting=0)
DEBUG [HikariPool-2 housekeeper]: pool.HikariPool (HikariPool.java:logPoolState(390)) - HikariPool-2 - Pool stats (total=20, active=0, idle=20, waiting=0)
DEBUG [Timer for 'hivemetastore' metrics system]: impl.MetricsSystemImpl (MetricsSystemImpl.java:snapshotMetrics(422)) - Snapshotted source UgiMetrics
DEBUG [Timer for 'hivemetastore' metrics system]: impl.MetricsSystemImpl (MetricsSystemImpl.java:snapshotMetrics(422)) - Snapshotted source hivemetastore
DEBUG [Timer for 'hivemetastore' metrics system]: impl.MetricsSystemImpl (MetricsSystemImpl.java:snapshotMetrics(422)) - Snapshotted source MetricsSystem,sub=Stats
DEBUG [Timer for 'hivemetastore' metrics system]: impl.MetricsSinkAdapter (MetricsSinkAdapter.java:putMetrics(98)) - enqueue, logicalTime=190000
DEBUG [timeline]: impl.MetricsSinkAdapter (MetricsSinkAdapter.java:consume(181)) - Pushing record UgiMetrics.ugi.UgiMetrics to timeline
DEBUG [timeline]: impl.MetricsSinkAdapter (MetricsSinkAdapter.java:consume(181)) - Pushing record hivemetastore.default.General to timeline
DEBUG [timeline]: impl.MetricsSinkAdapter (MetricsSinkAdapter.java:consume(181)) - Pushing record MetricsSystem,sub=Stats.metricssystem.MetricsSystem to timeline
DEBUG [timeline]: impl.MetricsSinkAdapter (MetricsSinkAdapter.java:consume(199)) - Done

And this exception:

Caused by: com.mysql.jdbc.PacketTooBigException: Packet for query is too large (6518628 > 4194304). You can change this value on the server by setting the max_allowed_packet' variable.
    at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:612) ~[mysql-connector-java.jar:?]

The HMS starts with no error, but it is not listening on port 9083, and HS2 and Hive Interactive fail to start.

HDP 3.1.0, Hive 3.0.0.3.1, MySQL 5.7, MySQL connector 5.1.25
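The PacketTooBigException shows a 6,518,628-byte packet being rejected by the 4,194,304-byte (4 MiB) server limit, so raising max_allowed_packet above ~8M should accept that particular packet. A sketch of checking and raising it (the 64 MiB value is illustrative; SET GLOBAL lasts until the next server restart, while my.cnf makes it permanent):

```sql
-- Check the current limit (in bytes); the MySQL 5.7 default is 4194304 (4 MiB)
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it at runtime; only NEW client connections pick up the new value
SET GLOBAL max_allowed_packet = 67108864;  -- 64 MiB
```

The persistent equivalent goes in my.cnf under the `[mysqld]` section as `max_allowed_packet=64M`. Note that HMS/HS2 hold pooled connections, so they must be restarted (or at least reconnect) before they see the new limit, which may explain a setting change appearing not to take effect.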
05-27-2020 10:13 AM
Hello All,
My Ranger DB is corrupted and I have no backup to restore it from. Can I drop the corrupt DB and recreate a new DB for Ranger? This is a Kerberized cluster in my test environment.
Any advice?
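If losing everything stored in the DB (policies, internal users/groups, service definitions) is acceptable in a test environment, dropping and recreating is generally workable, since Ranger Admin re-runs its schema setup on restart. A sketch for a MySQL-backed Ranger DB; the database name, user, and password below are placeholders, not values from this cluster:

```sql
-- DESTRUCTIVE: all existing Ranger policies and users in this DB are lost.
DROP DATABASE IF EXISTS ranger;
CREATE DATABASE ranger;

-- Re-grant the Ranger admin DB user (name and password are placeholders;
-- they must match the DB credentials configured in Ranger Admin)
GRANT ALL PRIVILEGES ON ranger.* TO 'rangeradmin'@'%' IDENTIFIED BY 'StrongPassword1!';
FLUSH PRIVILEGES;
```

After recreating the database, restarting Ranger Admin (or re-running the Ranger DB setup from Ambari) should rebuild the schema and regenerate default policies, but any custom policies have to be re-created by hand, and plugins will fall back to their cached policies until the new ones sync.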
Labels: Apache Ranger, Kerberos