Member since: 01-04-2016
Posts: 409
Kudos Received: 313
Solutions: 35
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6991 | 01-16-2018 07:00 AM |
| | 2614 | 09-13-2017 06:17 PM |
| | 4966 | 09-13-2017 05:58 AM |
| | 3171 | 08-28-2017 07:16 AM |
| | 4737 | 05-11-2017 11:30 AM |
10-26-2016
10:18 AM
@Smart Soultions The output of `id hdfs` should be:
```
[root@hdp-2 ~]# id hdfs
uid=9022(hdfs) gid=501(hadoop) groups=501(hadoop),9014(hdfs)
```
In your output there is no hdfs group. Create an hdfs group, add the hdfs user to it, and then try to restart the NameNode. For example:
```bash
groupadd group1
usermod -a -G group1 abc
```
In this example I created the group group1 and added the user abc to group1. A sketch applied to the hdfs user itself follows below.
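Applied to the hdfs user, the same steps would look like this (a minimal sketch; the group and user names are taken from the expected `id` output above):
```bash
# create the missing hdfs group and add the hdfs user to it (run as root)
groupadd hdfs
usermod -a -G hdfs hdfs
# verify: membership should now match the expected output above
id hdfs
```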
10-18-2016
06:10 AM
I verified the property. It is the right one, but I don't know why this error occurs.
10-18-2016
05:26 AM
1 Kudo
Hi, I am getting the following error while restarting the NameNode:
```
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 408, in <module>
NameNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 530, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 103, in start
upgrade_suspended=params.upgrade_suspended, env=env)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 212, in namenode
create_hdfs_directories(is_active_namenode_cmd)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 278, in create_hdfs_directories
only_if=check
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 463, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 460, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 246, in action_delayed
main_resource.resource.security_enabled, main_resource.resource.logoutput)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 133, in __init__
security_enabled, run_user)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/namenode_ha_utils.py", line 167, in get_property_for_active_namenode
if INADDR_ANY in value and rpc_key in hdfs_site:
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 81, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'dfs.namenode.https-address' was not found in configurations dictionary!
```
Thanks in advance.
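For anyone hitting the same failure: it means the effective hdfs-site configuration has no dfs.namenode.https-address entry. A quick way to check on the NameNode host (config path assumed for a typical HDP layout; adjust if yours differs):
```bash
grep -A2 'dfs.namenode.https-address' /etc/hadoop/conf/hdfs-site.xml
# an entry roughly like this is expected (50470 is the usual default port):
#   <name>dfs.namenode.https-address</name>
#   <value>0.0.0.0:50470</value>
```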
Labels:
- Apache Hadoop
10-13-2016
10:48 AM
1 Kudo
Thanks, I got the solution. The following step resolved the issue: I copied the MySQL connector JAR into /usr/share/java/.
```bash
cp /usr/lib/hive/lib/mysql-connector-java.jar /usr/share/java/
```
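A quick sanity check that the JAR landed where Ambari's DB-connection check looks for it (the path comes from the classpath in the error below):
```bash
# the classpath in the Ambari verification error points here
ls -l /usr/share/java/mysql-connector-java.jar
```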
10-13-2016
10:32 AM
1 Kudo
I am using HDP 2.1. While restarting Hive I am getting the following error:
```
2016-10-13 10:31:25,465 - Error while executing command 'restart':
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 123, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 233, in restart
self.start(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py", line 45, in start
action = 'start'
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_service.py", line 64, in hive_service
path='/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin', tries=5, try_sleep=10)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 241, in action_run
raise ex
Fail: Execution of '/usr/jdk64/jdk1.6.0_31/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/share/java/mysql-connector-java.jar org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://test.xxx.com/hive?createDatabaseIfNotExist=true' hive [PROTECTED] com.mysql.jdbc.Driver' returned 1. ERROR: Unable to connect to the DB. Please check DB connection properties. java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
```
Labels:
- Apache Hive
10-12-2016
07:55 AM
1 Kudo
@Sanjiv Kumar Try this:
1) 64-bit guests need hardware virtualization enabled in the host's BIOS, so enable Intel VT-x/AMD-V there (a quick host-side check follows below).
2) When creating the 64-bit guest, you must select a 64-bit version under General -> Basic -> Version.
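On a Linux host, a standard way to confirm the CPU actually exposes hardware virtualization before rebooting into the BIOS is to look at the /proc/cpuinfo flags:
```bash
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means the flag is absent
# (disabled in the BIOS or not supported by the CPU)
grep -E -c 'vmx|svm' /proc/cpuinfo
```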
10-10-2016
08:13 PM
1 Kudo
@Edgar This is a known bug; refer to this link: https://issues.apache.org/jira/browse/AMBARI-17622
10-10-2016
08:04 PM
3 Kudos
@Kumar Veerappan If you are taking only one DataNode down, you don't need to schedule downtime, because it won't cause any harm:
- Data on that node falls into the under-replicated category.
- The framework will take care of replicating those blocks back.
- When the node comes back, it will receive new data.
- As an end user, you shouldn't see any issue (a command for watching re-replication follows below).
If you take downtime for the whole cluster, the server will report missing-data errors until the DataNodes re-register.
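To watch the cluster heal after taking the node down, the standard fsck summary reports the under-replicated block count, which should shrink back to zero:
```bash
# run as the hdfs superuser; the fsck summary includes under-replicated blocks
sudo -u hdfs hdfs fsck / | grep -i 'under-replicated'
```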
10-10-2016
07:53 PM
2 Kudos
@Smart Solution Please refer to this link; it may help you: https://community.hortonworks.com/articles/43525/disaster-recovery-and-backup-best-practices-in-a-t.html
10-07-2016
05:54 PM
@Daniel Buraimo Please provide answers to the following questions (a command sketch for gathering them follows below):
1) Is there any firewall running?
2) What is the output of sestatus?
3) Can you verify your hostname? Provide the output of hostname -f, and also check that your /etc/hosts file is updated with the fully qualified domain name.
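All of the requested information can be gathered in one shot (assuming an RHEL/CentOS 6-era node, which matches the HDP releases discussed here):
```bash
service iptables status             # 1) firewall state (EL6; use firewalld on EL7)
sestatus                            # 2) SELinux mode
hostname -f                         # 3) fully qualified domain name
grep "$(hostname -f)" /etc/hosts    # confirm the FQDN appears in /etc/hosts
```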