Member since: 08-19-2019
Posts: 150
Kudos Received: 1
Solutions: 0
05-19-2022
02:47 PM
I got this error when enabling HBase backup with the configuration below in hbase-site.xml:

<property>
  <name>hbase.backup.enable</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master.logcleaner.plugins</name>
  <value>org.apache.hadoop.hbase.backup.master.BackupLogCleaner,...</value>
</property>
<property>
  <name>hbase.procedure.master.classes</name>
  <value>org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager,...</value>
</property>
<property>
  <name>hbase.procedure.regionserver.classes</name>
  <value>org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager,...</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.backup.BackupObserver,...</value>
</property>
<property>
  <name>hbase.master.hfilecleaner.plugins</name>
  <value>org.apache.hadoop.hbase.backup.BackupHFileCleaner,...</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>false</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>./tmp</value>
</property>
<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>
</configuration>
12-14-2020
02:37 AM
echo "scan 'emp'" | $HBASE_HOME/bin/hbase shell | awk -F'=' '{print $2}' | awk -F ':' '{print $2}'|awk -F ',' '{print $1}'
12-11-2020
06:09 AM
Hello @Manoj690. Thanks for contacting the Cloudera Community. While taking a full backup, you are facing an IOException while waiting on the lock. Kindly share the output of the command "hbase backup history" along with "list_locks" from the HBase shell. The requested details would confirm the status of any running backup and the locks placed on the tables. Additionally, share the HBase version with which you are running the backup command. - Smarak
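For reference, a sketch of how those outputs can be collected (assuming the hbase binary is on the PATH; list_locks is available in HBase 2.x shells):

# Backup history, from the command line
hbase backup history

# Lock status, piped into the HBase shell
echo "list_locks" | hbase shell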
12-08-2020
03:47 AM
The backup command, run as the hbase superuser:

hbase backup create full hdfs://hostname:port/backup -t table_name

The restore command:

hbase restore hdfs://hostname:port/backup -t table_name

We did this on the same cluster.
11-30-2020
09:28 PM
Hi, I also got the same issue. Did you find any solution for it?
11-22-2020
09:28 PM
Hello @Manoj690. The RegionServer is a service, and your team can add it interactively via Ambari (HDP) or Cloudera Manager (CDH or CDP). - Smarak
07-23-2020
02:38 AM
Can we delete Kafka consumer group data? Not the consumer group itself; I need to delete the group's data.
05-14-2020
10:45 PM
Did you resolve the issue? What steps did you follow? Please help me with the steps.
05-06-2020
02:22 AM
I am getting this error in the log file:

2020-05-06 04:47:53,605 INFO [AlertNoticeDispatchService RUNNING] AlertNoticeDispatchService:279 - There are 28 pending alert notices about to be dispatched...
2020-05-06 04:47:53,627 INFO [alert-dispatch-27] EmailDispatcher:94 - Sending email: Notification{ type=ALERT, subject=Alert Summary: OK[14], Warning[0], Critical[0], Unknown[0]}
2020-05-06 04:48:03,962 ERROR [alert-dispatch-27] EmailDispatcher:172 - Unable to dispatch notification via Email
javax.mail.MessagingException: Could not connect to SMTP host: smtp.gmail.com, port: 465, response: -1
    at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:2041)
    at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:697)
    at javax.mail.Service.connect(Service.java:386)
    at javax.mail.Service.connect(Service.java:245)
    at javax.mail.Service.connect(Service.java:194)
    at javax.mail.Transport.send0(Transport.java:253)
    at javax.mail.Transport.send(Transport.java:124)
    at org.apache.ambari.server.notifications.dispatchers.EmailDispatcher.dispatch(EmailDispatcher.java:160)
    at org.apache.ambari.server.notifications.DispatchRunnable.run(DispatchRunnable.java:58)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
04-17-2020
06:46 AM
Hey @Manoj690, thanks for reaching out to the Cloudera community. You can execute a PUT request against the path "/connectors/<Connector_name>/config" to update the configuration of an existing connector. Pass a JSON object with the updated parameter(s) in the body of the PUT request. Example request:

PUT /connectors/<Connector_name>/config
Accept: application/json

{
  "flush.size": "100",
  "rotate.interval.ms": "1000"
}

Let me know if this helps.
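For example, with curl against the Connect REST endpoint (the localhost:8083 worker address and the connector name are assumptions; note that PUT replaces the whole configuration, so a real request must carry the connector's complete config, not just the changed keys):

curl -X PUT -H "Content-Type: application/json" \
  --data '{"flush.size": "100", "rotate.interval.ms": "1000"}' \
  http://localhost:8083/connectors/<Connector_name>/config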
02-26-2020
04:51 AM
Hi, go to the Kafka log directory:

cd /kafka-logs

Under kafka-logs, edit meta.properties:

vi meta.properties

In that file, change broker.id=1001 to broker.id=1, then restart Kafka.
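A minimal non-interactive sketch of the same edit (the /kafka-logs path comes from the post above; adjust it to your broker's log.dirs):

# back up the file, then rewrite the broker id in place
cp /kafka-logs/meta.properties /kafka-logs/meta.properties.bak
sed -i 's/^broker.id=1001$/broker.id=1/' /kafka-logs/meta.properties
# then restart the Kafka broker, e.g. through Ambari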
01-31-2020
05:58 AM
1 Kudo
@Manoj690 It's always a good idea to share the HDP and ZooKeeper versions, plus the ZooKeeper logs in /var/log/*. Having said that, can you share your zoo.cfg? If you really need to enable all four-letter-word commands, you can use the asterisk option so you don't have to include every command one by one in the list. See below:

4lw.commands.whitelist=*

As you have not shared your logs, that's a starting point. Then restart your ZooKeeper and let me know!
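Once the whitelist is in place and ZooKeeper has restarted, you can verify it with any four-letter-word command (the host and the default client port 2181 are assumptions):

# "ruok" should answer "imok" if the server is up and the command is whitelisted
echo ruok | nc localhost 2181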
01-13-2020
08:57 PM
Hey Lewis, this is the Kafka installation on Ambari, but I need Kafka Connect on Ambari.
12-03-2019
01:34 AM
1 Kudo
@Manoj690 The error you are encountering is during the Ambari UI configuration of the Hive Metastore. I think in the Ambari UI you chose "New MySQL" instead of "Existing MySQL". Did you pre-create the following databases: Hive, Oozie, Ranger, RangerKMS?

In your previous thread you already did the step below to resolve the Ambari startup issue:

# ambari-server setup --jdbc-db=mysql --jdbc-driver=/path/to/mysql/mysql-connector-java.jar

That ensures and confirms that your MySQL database is well configured. Now you have the option to pre-create the other databases before the setup or create them during it; I always prefer the latter, so I will advise you to open up the Linux CLI and follow the steps below. For simplicity I have used a simple password! Always harden your password in production.

The assumptions here are:
hostname = gaian-lap386.com
password = welcome1
username and password = hive

# mysql -u root -pwelcome1
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
CREATE USER 'hive'@'gaian-lap386.com' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'gaian-lap386.com';
FLUSH PRIVILEGES;

The above creates the hive user with password hive and grants it privileges; you can use the same script to set up the other databases by replacing hive with e.g. oozie, ranger, rangerkms. These values will appear in the Ambari UI when you get to the Hive/Ranger/Oozie/RangerKMS database config (see screenshot); most will be prefilled. You will need to test the connection to validate that the values are correct, and readjust if not. I have never found an issue on this part. Just do the same for all the other databases.
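Following the note above, a sketch of the same script for the Oozie user, wrapped in a heredoc so it can be pasted into a shell (the oozie/oozie credentials are placeholder assumptions; harden them in production):

mysql -u root -pwelcome1 <<'SQL'
CREATE USER 'oozie'@'localhost' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'localhost';
CREATE USER 'oozie'@'%' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'%';
CREATE USER 'oozie'@'gaian-lap386.com' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'gaian-lap386.com';
FLUSH PRIVILEGES;
SQL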
12-01-2019
09:32 PM
2019-12-02 10:57:06,580 ERROR [main] AmbariServer:1114 - Failed to run the Ambari Server
javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Connections could not be acquired from the underlying database!
Error Code: 0
    at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:815)
    at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getAbstractSession(EntityManagerFactoryDelegate.java:205)
    at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.createEntityManagerImpl(EntityManagerFactoryDelegate.java:305)
    at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:337)
    at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:303)
    at com.google.inject.persist.jpa.JpaPersistService.begin(JpaPersistService.java:77)
    at com.google.inject.persist.jpa.AmbariJpaPersistService.begin(AmbariJpaPersistService.java:28)
    at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:40)
    at org.apache.ambari.server.checks.DatabaseConsistencyCheckHelper.checkDBVersionCompatible(DatabaseConsistencyCheckHelper.java:222)
    at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:1099)
Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Connections could not be acquired from the underlying database!
Error Code: 0
    at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:316)
    at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:147)
    at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162)
    at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.setOrDetectDatasource(DatabaseSessionImpl.java:207)
    at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:760)
    at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:265)
    at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:731)
    ... 9 more
Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database!
    at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:118)
    at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:692)
    at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:146)
    at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:144)
    ... 14 more
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
    at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1469)
    at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:644)
    at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:554)
    at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutAndMarkConnectionInUse(C3P0PooledConnectionPool.java:758)
    at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:685)
    ... 16 more
Caused by: java.sql.SQLException: Access denied for user 'ambari'@'xxxxxxxxx' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:959)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3870)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3806)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:871)
    at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1686)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1207)
    at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2254)
    at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2285)
    at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2084)
    at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:795)
    at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:44)
    at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
    at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:400)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:327)
    at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
    at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
    at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
    at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
    at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
    at com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
    at com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
    at com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
    at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)
11-28-2019
11:05 PM
It errors because I already created this user:

mysql> CREATE USER 'ambari'@'%' IDENTIFIED BY 'xxxxxxx';
ERROR 1396 (HY000): Operation CREATE USER failed for 'ambari'@'%'
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'xxxxxx';
ERROR 1396 (HY000): Operation CREATE USER failed for 'ambari'@'localhost'
mysql> GRANT ALL PRIVILEGES ON *.* TO 'xxxxxx'@'localhost';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
mysql> CREATE USER 'ambari'@'xxxxxx' IDENTIFIED BY 'xxxxxx';
ERROR 1396 (HY000): Operation CREATE USER failed for 'ambari'@'xxxxxxx'
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'xxxxxxx';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE USER 'ambari'@'%' IDENTIFIED BY 'xxxxxxxx';
ERROR 1396 (HY000): Operation CREATE USER failed for 'ambari'@'%'
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE USER 'ambari'@'xxxxx' IDENTIFIED BY 'xxxx';
ERROR 1396 (HY000): Operation CREATE USER failed for 'ambari'@'xxxxxx'
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'xxxxxxxx';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)
11-27-2019
05:31 AM
@Manoj690 I have removed the other post from view for you. Please reach out via private message if you would like to discuss further so we can keep this thread focused on your issue.
11-26-2019
03:48 AM
@Manoj690 My first suspicion is that the Ambari MySQL database on host jdbc:mysql://gaxxxn-xxx386.com/ambari1 is not running. Can you check by running:

# ambari-server status

If the output is "stopped", then just restart it:

# ambari-server start

Please revert. I see you posted another thread concerning HiveServer2 and the Ambari server: https://community.cloudera.com/t5/Support-Questions/hive-server-2-error/td-p/284042
11-19-2019
02:54 AM
HBase starts successfully, but Ambari still shows it as stopped.
11-18-2019
02:24 AM
Hi @Manoj690, it seems your AMS HBase master is not able to start. Please try the steps below:

1. In the Ambari Dashboard, go to the 'Ambari Metrics' section and under the 'Service Actions' dropdown click 'Stop'. Check and confirm from the backend that the AMS process is stopped. If the process is still running, use the command below to stop it:
# ambari-metrics-collector stop

2. Delete all AMS HBase data. In the Ambari Dashboard, under the Ambari Metrics section, search for the configuration value "hbase.rootdir" and remove all files under that directory, e.g.:
# hdfs dfs -cp /user/ams/hbase/* /tmp/
# hdfs dfs -rm -r -skipTrash /user/ams/hbase/*

3. In the Ambari Dashboard, under the Ambari Metrics section, search for the configuration value "hbase.tmp.dir". Back up the directory and remove the data, e.g.:
# cp /var/lib/ambari-metrics-collector/hbase-tmp/* /tmp/
# rm -fr /var/lib/ambari-metrics-collector/hbase-tmp/*

4. Remove the znode for HBase in the ZooKeeper CLI. Log in to the Ambari UI -> Ambari Metrics -> Configs -> Advanced ams-hbase-site and search for the property "zookeeper.znode.parent", then:
# /usr/hdp/current/zookeeper-client/bin/zkCli.sh
rmr /ams-hbase-secure

5. Start AMS.

Let me know if you still have the issue.
11-07-2019
09:44 PM
@Manoj690 Can you check whether authorization has been delegated to Ranger/Kerberos/SQL-based authorization? If you have the Ranger plugin for Hive enabled, then authorization has been delegated to Ranger as the central authority, and you will need to grant the permissions through Ranger for all Hive databases. In Hive > Configs > Settings > Security, what is it set to?
09-25-2019
12:14 PM
@Manoj690 Go to Ambari > Hive > CONFIGS > ADVANCED > Custom hive-site and add hive.users.in.admin.role to the list of comma-separated users who require admin role authorization (such as the user hive). Restart the Hive services for the changes to take effect. The permission denied error should be fixed after adding hive.users.in.admin.role=hive and restarting Hive, because properties that are listed in hive.conf.restricted.list cannot be reset with hiveconf. Please do that and revert.
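In hive-site.xml terms, that Custom hive-site entry corresponds to a property block like this (assuming hive is the only user who needs the admin role):

<property>
  <name>hive.users.in.admin.role</name>
  <value>hive</value>
</property>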
09-23-2019
11:28 PM
HBase shell command not executed:

/usr/local/Hbase# ./bin/hbase shell
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.java.invokers.RubyToJavaInvoker (file:/usr/local/Hbase/lib/jruby-complete-1.6.8.jar) to method java.lang.Object.registerNatives()
WARNING: Please consider reporting this to the maintainers of org.jruby.java.invokers.RubyToJavaInvoker
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
ArgumentError: wrong number of arguments (0 for 1)
  method_added at file:/usr/local/Hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:10
  method_added at file:/usr/local/Hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:129
  Pattern at file:/usr/local/Hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:2
  (root) at file:/usr/local/Hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:1
  require at org/jruby/RubyKernel.java:1062
  (root) at file:/usr/local/Hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:42
  (root) at /usr/local/Hbase/bin/../bin/hirb.rb:38
09-09-2019
01:57 PM
@Manoj690 I think this is a permission issue that should be resolved if you follow the steps below.

Solution: Go to Ambari > Hive > CONFIGS > ADVANCED > Custom hive-site and add hive.users.in.admin.role to the list of comma-separated users who require admin role authorization (such as the user hive), if it doesn't already exist. Restart the stale Hive services for the changes to take effect, then retry.
09-09-2019
04:45 AM
How do I give write access to the file or folder?
09-06-2019
03:44 AM
LOAD DATA LOCAL INPATH 'empdata.txt' OVERWRITE INTO TABLE emp;

Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: Principal [name=hive, type=USER] does not have following privileges for operation LOAD [ADMIN] (state=42000,code=40000)
09-05-2019
07:51 AM
You can use hdfs fsck / to determine which files have problems. Look through the output for missing or corrupt blocks (ignore under-replicated blocks for now). This command is really verbose, especially on a large HDFS filesystem, so I normally get down to the meaningful output with:

hdfs fsck / | egrep -v '^\.+$' | grep -v eplica

which ignores lines with nothing but dots and lines talking about replication. Once you find a file that is corrupt:

hdfs fsck /path/to/corrupt/file -locations -blocks -files

Use that output to determine where the blocks might live. If the file is larger than your block size, it might have multiple blocks. You can use the reported block numbers to go around to the datanodes and the namenode logs, searching for the machine or machines on which the blocks lived. Try looking for filesystem errors on those machines: missing mount points, a datanode not running, a filesystem reformatted or reprovisioned. If you can find a problem that way and bring the block back online, the file will be healthy again. Lather, rinse, and repeat until all files are healthy or you exhaust all alternatives looking for the blocks. Once you determine what happened and you cannot recover any more blocks, just use:

hdfs dfs -rm /path/to/file/with/permanently/missing/blocks

to get your HDFS filesystem back to healthy so you can start tracking new errors as they occur.
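For quick reference, the three commands from this procedure in order (the file paths are placeholders from the text above):

hdfs fsck / | egrep -v '^\.+$' | grep -v eplica             # 1. find files with missing/corrupt blocks
hdfs fsck /path/to/corrupt/file -locations -blocks -files   # 2. locate the blocks of one corrupt file
hdfs dfs -rm /path/to/file/with/permanently/missing/blocks  # 3. last resort: remove unrecoverable files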