Member since: 03-29-2016 | Posts: 20 | Kudos Received: 2 | Solutions: 0
10-27-2016 04:45 PM
On again, off again... I managed to fix my MySQL install:

    CREATE USER 'hive'@'%' IDENTIFIED BY 'passwd';
    GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
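For reference, one quick way to confirm the recreated account actually works is to connect as `hive` from the metastore host before retrying the Ambari start. This is a sketch only; the hostname is taken from the logs in this thread and the password is the placeholder used above:

```shell
# Connect as the hive user from the Hive Metastore host and confirm
# the hive database is visible (hostname/password are placeholders).
mysql -u hive -p'passwd' -h bu-hdp2-sn.lss.emc.com \
  -e "SHOW DATABASES LIKE 'hive';"
```

If this fails with an access-denied error, the grants above did not take effect for the host the metastore connects from.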
10-27-2016 03:30 PM
OK, this is unbelievably maddening. Hive was only working because it had somehow reverted to the Postgres datastore, which I only discovered just now. When I switched it back to MySQL, things started failing again:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 245, in <module>
HiveMetastore().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 530, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 58, in start
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 72, in configure
hive(name = 'metastore')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 296, in hive
user = params.hive_user
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1.
WARNING: Use "yarn jar" to launch YARN applications.
Metastore connection URL: jdbc:mysql://bu-hdp2-sn.lss.emc.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
*** schemaTool failed ***

I tried to fix things with the schema tool, but it won't let me:

[root@bu-hdp2-nn /]# /usr/hdp/current/hive-metastore/bin/schematool -dbType mysql -initSchema
WARNING: Use "yarn jar" to launch YARN applications.
Metastore connection URL: jdbc:mysql://bu-hdp2-sn.lss.emc.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
*** schemaTool failed ***
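"Failed to get schema version" usually means schematool can reach the database but the metastore schema (specifically the VERSION table) is missing or in a mixed state, which fits the Postgres-to-MySQL flip-flop described above. A hedged sketch of checking by hand, assuming the connection settings shown in the output:

```shell
# Does the hive database contain a metastore schema at all?
# schematool reads the schema version from the VERSION table.
mysql -u hive -p -h bu-hdp2-sn.lss.emc.com \
  -e "USE hive; SHOW TABLES LIKE 'VERSION';"

# If the database exists but is empty or partially created, one common
# recovery path is to drop and recreate it, then rerun schematool:
# mysql -u root -p -e "DROP DATABASE hive; CREATE DATABASE hive;"
# /usr/hdp/current/hive-metastore/bin/schematool -dbType mysql -initSchema
```

The drop/recreate step destroys any existing metastore contents, so it only makes sense on a metastore that has never successfully initialized against MySQL.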
10-25-2016 01:36 PM
I assume the one bit below is a typo? Anyway, I was eventually able to get it going after some more fiddling and restarts with Ambari. I also uninstalled and reinstalled Hive. Thanks for your suggestions!

    mysql -u root -hive
10-24-2016 03:10 PM
Thanks for the fast response! Unfortunately, it didn't help:

[root@bu-hdp2-nn /]# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
Using python /usr/bin/python
Setup ambari-server
Copying /usr/share/java/mysql-connector-java.jar to /var/lib/ambari-server/resources
JDBC driver was successfully initialized.
Ambari Server 'setup' completed successfully.
[root@bu-hdp2-nn /]# tail -20 /var/log/ambari-server/ambari-server.log
24 Oct 2016 09:43:30,369 INFO [qtp-ambari-client-28805] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,541 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,543 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,545 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,548 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,550 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,553 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,555 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,558 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,561 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,563 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,565 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,566 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,568 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,570 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,572 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,573 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,575 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,577 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com
24 Oct 2016 09:43:36,579 INFO [qtp-ambari-client-22] AbstractProviderModule:354 - Metrics Collector Host or host component not live : bu-hdp2-sn.lss.emc.com

Do I need to enable this service somehow to test the database connection when setting up Hive? Test connection still fails with exactly the same error.
10-24-2016 02:41 PM
I ran the setup script, which apparently was OK:

ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
Using python /usr/bin/python
Setup ambari-server
Copying /usr/share/java/mysql-connector-java.jar to /var/lib/ambari-server/resources
JDBC driver was successfully initialized.
Ambari Server 'setup' completed successfully.

MySQL has been set up with a hive account using the recommended approach:

    # mysql -u root -p
    CREATE USER '<HIVEUSER>'@'localhost' IDENTIFIED BY '<HIVEPASSWORD>';
    GRANT ALL PRIVILEGES ON *.* TO '<HIVEUSER>'@'localhost';
    CREATE USER '<HIVEUSER>'@'%' IDENTIFIED BY '<HIVEPASSWORD>';
    GRANT ALL PRIVILEGES ON *.* TO '<HIVEUSER>'@'%';
    CREATE USER '<HIVEUSER>'@'<HIVEMETASTOREFQDN>' IDENTIFIED BY '<HIVEPASSWORD>';
    GRANT ALL PRIVILEGES ON *.* TO '<HIVEUSER>'@'<HIVEMETASTOREFQDN>';
    FLUSH PRIVILEGES;

But when I go to add the Hive service from Ambari (I set the metastore to be on the secondary namenode) and test the connection:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 477, in <module>
CheckHost().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 212, in actionexecute
raise Fail(error_message)
resource_management.core.exceptions.Fail: Check db_connection_check was unsuccessful. Exit code: 1. Message: Error: Ambari Server cannot download the database JDBC driver and is unable to test the database connection. You must run ambari-server setup --jdbc-db=mysql --jdbc-driver=/path/to/your/mysql/driver.jar on the Ambari Server host to make the JDBC driver available for download and to enable testing the database connection.
HTTP Error 404: Not Found
stdout:
DB connection check started.
WARNING: File /var/lib/ambari-agent/cache/DBConnectionVerification.jar already exists, assuming it was downloaded before
Error: Ambari Server cannot download the database JDBC driver and is unable to test the database connection. You must run ambari-server setup --jdbc-db=mysql --jdbc-driver=/path/to/your/mysql/driver.jar on the Ambari Server host to make the JDBC driver available for download and to enable testing the database connection.
HTTP Error 404: Not Found

I entered the correct password at the prompt for the hive account I created in MySQL, but I keep getting this error. What step could I be missing?
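The HTTP 404 in the check output suggests the agent host could not download the driver from the Ambari Server's resources endpoint, rather than a MySQL credential problem. A hedged way to check, assuming Ambari's default port and resources path (the server hostname is a placeholder):

```shell
# From any agent host: can the JDBC driver be fetched from the
# Ambari Server resources endpoint the db_connection_check uses?
curl -I http://<ambari-server-host>:8080/resources/mysql-connector-java.jar

# On the Ambari Server host: was the jar actually copied by setup?
ls -l /var/lib/ambari-server/resources/mysql-connector-java.jar
```

If the jar is present on disk but the curl still returns 404, restarting ambari-server after the `setup --jdbc-driver` step is worth trying, since the resources listing may be stale.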
Labels: Apache Ambari, Apache Hive
07-28-2016 12:35 PM
Interesting. I was seeing significant speedups even with maps running on symmetrical data nodes. So the only downside is that the initial setup time is greater with the dynamic strategy.
07-27-2016 08:09 PM
We did some preliminary tests, and it seems DistCp with -strategy dynamic improves performance by a substantial amount on our workload. Digging through what documentation I can find, it does say that it improves performance with most workloads, but I can't find any clear guidance on which workloads it would perform poorly with.

1. If it is so much better in most situations, why isn't -strategy dynamic the Hadoop default?
2. What are the potential downsides to using it by default? Is there any use case where -strategy uniform would perform better?
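For anyone comparing the two, the invocations differ only in the flag; the source and destination paths here are placeholders:

```shell
# Default: uniform-size strategy. The file listing is split up front so
# each map task gets roughly equal total bytes; no runtime rebalancing.
hadoop distcp hdfs://src-nn/data hdfs://dst-nn/backup

# Dynamic strategy: maps pull work chunks from a shared queue at
# runtime, so faster maps take on more files. This evens out stragglers
# when file sizes are skewed, at the cost of extra setup work to build
# the chunk files before the job starts.
hadoop distcp -strategy dynamic hdfs://src-nn/data hdfs://dst-nn/backup
```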
Labels: Apache Hadoop
04-22-2016 07:23 PM
Actually, I checked that ExportSnapshot implements the Tool interface, hence it does support -files and -libjars, so I think the approach is still good. Sorry, false panic.
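Since the generic options are handled by Hadoop's option parsing rather than by ExportSnapshot itself, they have to come before the tool-specific arguments. A hedged sketch; the snapshot name, jar path, and external filesystem URI are placeholders:

```shell
# -libjars/-files are generic Hadoop options and must precede the
# ExportSnapshot-specific arguments (-snapshot, -copy-to).
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -libjars /path/to/external-fs.jar \
  -snapshot my_snapshot \
  -copy-to extfs://backup-appliance/hbase
```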
04-21-2016 06:53 PM
Sorry, one related problem: can ExportSnapshot be passed an external .jar and library files using -files and -libjars to support the external filesystem, as can be done with DistCp? If not, is there any other way to make ExportSnapshot work if the DistCp operation needs an external .jar and libraries?
03-29-2016 09:47 PM
1 Kudo
I would like to export HBase snapshot data to a backup appliance outside of HDFS for disaster recovery. I can use ExportSnapshot to export the table to a non-HDFS URI, no problem. But there is no ImportSnapshot: if I try to use ExportSnapshot to bring it back from the external source outside the cluster, using --copy-from <Backup-appliance-URI>/hbase and --copy-to /hbase, it complains about an unexpected (non-HDFS) URI. If I use DistCp to copy it to a staging area on the HDFS cluster first and then use ExportSnapshot to bring it back, it works, but that is undesirable because I need to create a temporary staging area. I found I was able to use DistCp directly on the exported snapshot, copy it back to hdfs://hbase, and restore the table. This looks like a good solution and seems to work, but are there any unexpected problems I could run into? Is this a recommended approach?
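The round trip described above, sketched with placeholder URIs. This assumes the exported layout keeps the standard `.hbase-snapshot` and `archive` directories under the HBase root, and that the cluster's hbase.rootdir is the HDP default; adjust both for your setup:

```shell
# Export the snapshot out of the cluster to the backup appliance:
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my_snapshot -copy-to extfs://backup-appliance/hbase

# Restore path: copy the exported snapshot metadata and HFiles straight
# back into the cluster's HBase root with DistCp, no staging area.
hadoop distcp extfs://backup-appliance/hbase/.hbase-snapshot \
  hdfs:///apps/hbase/data/.hbase-snapshot
hadoop distcp extfs://backup-appliance/hbase/archive \
  hdfs:///apps/hbase/data/archive

# Then, from the hbase shell: restore_snapshot 'my_snapshot'
```

One caveat worth checking: DistCp copies bytes only, so file ownership and permissions under the HBase root may need fixing up for the hbase service user afterwards.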
Labels: Apache HBase, HDFS