Member since: 05-07-2019
Posts: 20
Kudos Received: 1
Solutions: 0
02-02-2022
11:33 PM
@er_sharma_shant @jsensharma Could you please tell me how to resolve the HiveServer2 start issue by adding the znode name for hiveserver2 in the zkCli shell? The /hiveserver2 znode exists in ZooKeeper, but it has no HiveServer2 instance registered under it, which is why HiveServer2 is failing to start.
Welcome to ZooKeeper!
2022-02-03 02:31:36,504 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1013] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2022-02-03 02:31:36,592 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@856] - Socket connection established, initiating session, client: /127.0.0.1:36762, server: localhost/127.0.0.1:2181
2022-02-03 02:31:36,609 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1273] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x27ebd79aecc014c, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, registry, controller, brokers, storm, infra-solr, zookeeper, hbase-unsecure, hadoop-ha, tracers, admin, isr_change_notification, log_dir_event_notification, accumulo, controller_epoch, hiveserver2, hiveserver2-leader, druid, rmstore, atsv2-hbase-unsecure, consumers, ambari-metrics-cluster, latest_producer_id_block, config]
[zk: localhost:2181(CONNECTED) 1] ls /hiveserver2
[]
[zk: localhost:2181(CONNECTED) 2]
Any help would be much appreciated! Thank you.
02-02-2022
11:10 PM
I am facing an error during HiveServer2 start: "caught exception: ZooKeeper node /hiveserver2 is not ready yet". When I debug further, I see that there is no HiveServer2 instance registered in ZooKeeper.
Welcome to ZooKeeper!
2022-02-03 01:58:35,986 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1013] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2022-02-03 01:58:36,072 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@856] - Socket connection established, initiating session, client: /127.0.0.1:59736, server: localhost/127.0.0.1:2181
2022-02-03 01:58:36,093 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1273] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x27ebd79aecc0141, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /hiveserver2
[]
[zk: localhost:2181(CONNECTED) 1]
[zk: localhost:2181(CONNECTED) 2] ls /
[cluster, registry, controller, brokers, storm, infra-solr, zookeeper, hbase-unsecure, hadoop-ha, tracers, admin, isr_change_notification, log_dir_event_notification, accumulo, controller_epoch, hiveserver2, hiveserver2-leader, druid, rmstore, atsv2-hbase-unsecure, consumers, ambari-metrics-cluster, latest_producer_id_block, config]
Can someone tell me how to create the znode entry if HiveServer2 is not registered under the znode?
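For context on the znode being checked here: HiveServer2 registers an ephemeral entry under /hiveserver2 only when dynamic service discovery is enabled, so the related hive-site.xml settings are worth verifying. A sketch of the check (not a confirmed fix):

grep -A1 -E 'hive.server2.support.dynamic.service.discovery|hive.zookeeper.quorum|hive.server2.zookeeper.namespace' /etc/hive/conf/hive-site.xml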
- Tags:
- HDP
- Hive
- HiveServer
02-02-2022
10:03 PM
@Shelton Regarding the solution you mentioned in point 3 ("There seems to be a problem with HiveServer2 creating a znode in ZooKeeper. [caught exception: ZooKeeper node /hiveserver2 is not ready yet]"): how can I create the hiveserver2 znode instance in ZooKeeper if it has not been created?
02-02-2022
08:27 PM
Hi @Shelton, I am facing the same issue as mentioned above while starting HiveServer2. I followed your debug steps, and when I ran "ls /hiveserver2" in the zkCli shell, I got the response below.
Welcome to ZooKeeper!
2022-02-02 23:11:02,741 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1013] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2022-02-02 23:11:02,834 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@856] - Socket connection established, initiating session, client: /127.0.0.1:58334, server: localhost/127.0.0.1:2181
2022-02-02 23:11:02,851 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1273] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x27ebd79aecc0088, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /hiveserver2
[]
[zk: localhost:2181(CONNECTED) 1]
This means I don't have a hiveserver2 entry in my ZooKeeper. And when I checked hiveserver2.log under the /var/log/hive folder, I saw the permission-denied error below.
Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied: user=hive, access=EXECUTE, inode="/tmp/hive"
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:457)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:604)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1858)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1876)
Please help me resolve this if you have come across this issue before. Thanks
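For reference, a hedged sketch of the usual HDFS-side checks for this error; since the trace shows the Ranger HDFS plugin (RangerAccessControlEnforcer) doing the enforcement, a Ranger policy granting the hive user EXECUTE on /tmp/hive may be the actual fix. The ownership and mode below are illustrative assumptions, not confirmed values:

# Inspect current ownership and permissions on /tmp/hive
hdfs dfs -ls -d /tmp/hive
# Possible HDFS-side fix, run as the HDFS superuser
sudo -u hdfs hdfs dfs -chown hive:hadoop /tmp/hive
sudo -u hdfs hdfs dfs -chmod 733 /tmp/hive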
02-02-2022
07:38 PM
@jsensharma I am facing an error starting HiveServer2 on HDP 3.1.5. When I check the hiveserver2 logs under /var/log/hive, it says "Metrics source hiveserver2 already exists!".
2022-02-02T00:00:44,090 ERROR [main]: metrics2.CodahaleMetrics (:()) - Unable to instantiate using constructor(MetricRegistry, HiveConf) for reporter org.apache.hadoop.hive.common.metrics.metrics2.Metrics2Reporter from conf HIVE_CODAHALE_METRICS_REPORTER_CLASSES
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initCodahaleMetricsReporterClasses(CodahaleMetrics.java:429) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initReporting(CodahaleMetrics.java:396) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.<init>(CodahaleMetrics.java:196) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.hive.common.metrics.common.MetricsFactory.init(MetricsFactory.java:42) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:213) ~[hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1087) ~[hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hive.service.server.HiveServer2.access$1700(HiveServer2.java:137) ~[hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1356) ~[hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1200) ~[hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
at org.apache.hadoop.util.RunJar.run(RunJar.java:318) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:232) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source hiveserver2 already exists!
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
at com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.<init>(HadoopMetrics2Reporter.java:206) ~[dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:?]
at com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.<init>(HadoopMetrics2Reporter.java:62) ~[dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:?]
at com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter$Builder.build(HadoopMetrics2Reporter.java:162) ~[dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:?]
at org.apache.hadoop.hive.common.metrics.metrics2.Metrics2Reporter.<init>(Metrics2Reporter.java:45) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
... 23 more
2022-02-02T00:00:44,090 WARN [main]: server.HiveServer2 (HiveServer2.java:init(216)) - Could not initiate the HiveServer2 Metrics system. Metrics may not be reported.
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.hive.common.metrics.common.MetricsFactory.init(MetricsFactory.java:42) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:213) [hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1087) [hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hive.service.server.HiveServer2.access$1700(HiveServer2.java:137) [hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1356) [hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1200) [hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
at org.apache.hadoop.util.RunJar.run(RunJar.java:318) [hadoop-common-3.1.1.3.1.5.0-152.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:232) [hadoop-common-3.1.1.3.1.5.0-152.jar:?]
Caused by: java.lang.IllegalArgumentException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initCodahaleMetricsReporterClasses(CodahaleMetrics.java:437) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initReporting(CodahaleMetrics.java:396) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.<init>(CodahaleMetrics.java:196) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
Requesting your help to resolve this.
04-22-2021
12:27 AM
@jijose If you are using Cloudera Manager, log in to the Cloudera Manager UI > click "Cluster" > click "YARN" > Actions > Add Role Instances. You will land on the Assign Roles page. Assign the Gateway role to the host from which you want to run the job. Save the configuration and deploy the client configuration. You can then try submitting the job from the newly added YARN Gateway host. Thanks
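The same steps can also be scripted against the Cloudera Manager REST API. A minimal sketch, assuming CM at cm-host:7180, API version v19, a cluster named Cluster1, admin credentials, and a host ID looked up beforehand via GET /api/v19/hosts (all of these are placeholders for your environment):

# Add a YARN Gateway role instance on the chosen host
curl -u admin:admin -X POST -H "Content-Type: application/json" \
  "http://cm-host:7180/api/v19/clusters/Cluster1/services/yarn/roles" \
  -d '{"items":[{"type":"GATEWAY","hostRef":{"hostId":"<host-id>"}}]}'
# Deploy client configuration across the cluster
curl -u admin:admin -X POST \
  "http://cm-host:7180/api/v19/clusters/Cluster1/commands/deployClientConfig"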
07-14-2020
12:22 AM
Adding a bit of clarification to the solution mentioned above. Find "Ranger External URL" under Ranger > Configs > Advanced > Ranger Settings. It will be something like "http://<ranger_admin_host>:6080". Copy this URL and update it in each service for which the Ranger plugin is enabled. For example, for HDFS: HDFS > Configs > Advanced > Advanced ranger-hdfs-security > ranger.plugin.hdfs.policy.rest.url. Usually this field is auto-populated with the Ranger External URL value; if it is not, it will look like "{{policy_mgr_url}}". Update this field with the Ranger External URL. Restart the Ranger service, the Ranger KMS service, and all other required services.
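The same property can also be set from the command line with the configs.py helper that ships with Ambari. A hedged sketch, assuming Ambari at ambari-host:8080, a cluster named MyCluster, and admin credentials (adjust all of these to your environment):

python /var/lib/ambari-server/resources/scripts/configs.py \
  -u admin -p admin -l ambari-host -t 8080 -a set -n MyCluster \
  -c ranger-hdfs-security \
  -k ranger.plugin.hdfs.policy.rest.url \
  -v "http://<ranger_admin_host>:6080"

Restart the affected services afterwards for the change to take effect.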
05-31-2020
11:57 PM
@ccibi75 Thanks for the solution to resolve the Timeline Server v2.0 start issue in HDP 3.x. It worked!
05-07-2019
12:17 AM
Hi, thanks for the resolution. Are there any specific procedures to upgrade the krb packages to 1.15.1-19? Detailed steps for the upgrade, and for starting the Hadoop cluster post-upgrade, would be very helpful.
03-14-2019
07:51 AM
Run "sudo yum install -y mysql-connector-java" to install mysql-connector-java.jar, then check the path where mysql-connector-java is installed:
[root@c902f08x05 ~]# rpm -ql mysql-connector-java-*
/usr/share/doc/mysql-connector-java-5.1.25
/usr/share/doc/mysql-connector-java-5.1.25/CHANGES
/usr/share/doc/mysql-connector-java-5.1.25/COPYING
/usr/share/doc/mysql-connector-java-5.1.25/docs
/usr/share/doc/mysql-connector-java-5.1.25/docs/README.txt
/usr/share/doc/mysql-connector-java-5.1.25/docs/connector-j.html
/usr/share/doc/mysql-connector-java-5.1.25/docs/connector-j.pdf
/usr/share/java/mysql-connector-java.jar
/usr/share/maven-fragments/mysql-connector-java
/usr/share/maven-poms/JPP-mysql-connector-java.pom
[root@c902f08x05 ~]#
Note the jar path and run ambari-server setup with the JDBC driver path:
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
Then retry the hive-client install; it should work.
03-13-2019
09:46 PM
@Jay Kumar SenSharma
[root@c902f10x09 ~]# grep 'database_name' /etc/ambari-server/conf/ambari.properties
server.jdbc.database_name=ambari
[root@c902f10x09 ~]# grep 'schema' /etc/ambari-server/conf/ambari.properties
server.jdbc.postgres.schema=ambari
[root@c902f10x09 ~]# grep 'jdbc' /etc/ambari-server/conf/ambari.properties
custom.mysql.jdbc.name=mysql-jdbc-driver.jar
previous.custom.mysql.jdbc.name=mysql-connector-java.jar
server.jdbc.connection-pool=internal
server.jdbc.database=postgres
server.jdbc.database_name=ambari
server.jdbc.driver=org.postgresql.Driver
server.jdbc.hostname=localhost
server.jdbc.port=5432
server.jdbc.postgres.schema=ambari
server.jdbc.rca.driver=org.postgresql.Driver
server.jdbc.rca.url=jdbc:postgresql://c902f10x09.gpfs.net:5432/ambari
server.jdbc.rca.user.name=ambari
server.jdbc.rca.user.passwd=/etc/ambari-server/conf/password.dat
server.jdbc.url=jdbc:postgresql://c902f10x09.gpfs.net:5432/ambari
server.jdbc.user.name=ambari
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
For your reference.
03-13-2019
10:03 AM
@Jay Kumar SenSharma I tried adding the "ambari." schema prefix to the query and it works.
ambari=# update ambari.repo_version set stack_id = (select stack_id from ambari.stack where stack_version = '2.3') where display_name like 'BigInsights%';
UPDATE 1
ambari=#
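An alternative to prefixing every table, assuming the tables live in the "ambari" schema, is to put that schema on the session search path so unqualified names resolve (a sketch in the same psql session as above):

ambari=# SET search_path TO ambari, public;
SET
ambari=# update repo_version set stack_id = (select stack_id from stack where stack_version = '2.3') where display_name like 'BigInsights%';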
03-13-2019
05:28 AM
Hi, I am trying to run the SQL query below in the Ambari database to update one of the configurations, but the query is failing with the following error.
[root@c902f10x09 ~]# su - postgres
Last login: Tue Mar 12 22:10:31 EDT 2019 on pts/0
JAVA at /usr/jdk64/jdk1.8.0_112
-bash-4.2$ /usr/pgsql-9.6/bin/psql
psql (9.6.12)
Type "help" for help.
postgres=# \connect ambari
You are now connected to database "ambari" as user "postgres".
ambari=# update repo_version set stack_id = (select stack_id from stack where stack_version = '2.3') where display_name like 'BigInsights%';
ERROR: relation "repo_version" does not exist
LINE 1: update repo_version set stack_id = (select stack_id from sta...
^
ambari=#
How can I resolve this "ERROR: relation "repo_version" does not exist" error? @Aditya Sirna, any thoughts?
- Tags:
- upgrade
03-11-2019
02:14 PM
Hi, I am performing an express upgrade from BigInsights 4.2.5 to HDP 2.6.4, and in the express upgrade pre-check I am seeing the critical warning "CRITICAL: Ambari Agent Distro/Conf Select Versions" on all my hosts. I verified /usr/hdp on my hosts and I see files only for HDP version 2.6.4.0-91.
[root@c902f10x09 hdp]# pwd
/usr/hdp
[root@c902f10x09 hdp]# ls -la
total 8
drwxr-xr-x 4 root root 39 Mar 10 22:21 .
drwxr-xr-x. 18 root root 213 Mar 10 22:20 ..
drwxr-xr-x 38 root root 4096 Mar 10 22:27 2.6.4.0-91
drwxr-xr-x 2 root root 4096 Mar 10 22:27 current
[root@c902f10x09 hdp]#
How can I resolve this issue?
03-11-2019
02:14 PM
Hi, I am performing an express upgrade from BigInsights 4.2.5 to HDP 2.6.4, and during the express upgrade I am seeing "CRITICAL: Ambari Agent Distro/Conf Select Versions" on all the hosts. I verified the /usr/hdp folder on all hosts and I see only the "2.6.4.0-91" version.
[root@node1 hdp]# pwd
/usr/hdp
[root@node1 hdp]# ls -la
total 8
drwxr-xr-x 4 root root 39 Mar 10 22:12 .
drwxr-xr-x. 18 root root 213 Mar 10 22:12 ..
drwxr-xr-x 34 root root 4096 Mar 10 22:18 2.6.4.0-91
drwxr-xr-x 2 root root 4096 Mar 10 22:18 current
[root@node1 hdp]#
hdp-select versions also returns a single version:
[root@node1 hdp]# hdp-select versions
2.6.4.0-91
[root@node1 hdp]#
How can this issue be resolved so the critical warnings clear during the express upgrade?
01-22-2019
10:01 AM
Hi, to check whether a directory has adequate space during installation or upgrade procedures (for example, during an HDP upgrade you should verify that /usr/hdp has adequate space for the target HDP version), use the following format:
df -h <path_of_interest>
Example:
[alex@machine1]# df -h /usr/hdp/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/system-root 528G 22G 506G 5% /
[alex@machine1]#
This shows the disk size, used space, available space, and percentage used.
07-12-2018
01:36 PM
@Aditya: Thanks, this is really informative.
03-27-2018
12:14 AM
1 Kudo
1. Back up all the data that resides under the <File_system_mount point>/apps/accumulo/data directory (one way to take this backup is sketched after this list).
2. Remove the <File_system_mount point>/apps/accumulo/data directory by running:
sudo -u hdfs hdfs dfs -rm -R /apps/accumulo/data
3. Reinitialize the Accumulo service:
sudo -u accumulo ACCUMULO_CONF_DIR=/etc/accumulo/conf/server accumulo init --instance-name hdp-accumulo-instance --clear-instance-name
Enter a valid password when asked for the initial password for root.
4. Update the Accumulo root password and trace user password from the Ambari GUI. Set them to the same password provided in the previous step.
5. Restart Accumulo.
6. After the Accumulo service starts successfully and the service check passes, you can restore the data.
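One way to take the backup described in step 1 is an HDFS-to-HDFS copy (a sketch, assuming enough free space on the same filesystem; the .bak destination is just an example path):

sudo -u hdfs hdfs dfs -cp -p /apps/accumulo/data /apps/accumulo/data.bak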
12-07-2017
06:03 AM
This works absolutely fine. Thanks for the workaround.