Member since: 12-11-2015
Posts: 213
Kudos Received: 87
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 3222 | 12-20-2016 03:27 PM
| 12814 | 07-26-2016 06:38 PM
12-20-2016 02:52 PM
1 Kudo
The DataNode is not staying up on any node of the cluster. I have a seven-node cluster with 4 DataNodes. What's going on? Below is what I see when I perform an HDFS service check:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/service_check.py", line 146, in <module>
HdfsServiceCheck().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/service_check.py", line 67, in service_check
action="create_on_execute"
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 402, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 399, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 255, in action_delayed
self._create_resource()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 269, in _create_resource
self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 322, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 210, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT -T /etc/passwd 'http://hdp-m.asotc:50070/webhdfs/v1/tmp/id000a1902_date422016?op=CREATE&user.name=hdfs&overwrite=True'' returned status_code=403.
{
"RemoteException": {
"exception": "IOException",
"javaClassName": "java.io.IOException",
"message": "Failed to find datanode, suggest to check cluster health."
}
}
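For anyone landing here with the same 403: the RemoteException means the NameNode could not find a single live DataNode to write the file to. A few checks that help narrow it down (a sketch, assuming the hdfs service user and default HDP log locations):
# Ask the NameNode how many DataNodes it currently sees as live/dead
sudo -u hdfs hdfs dfsadmin -report | grep -i datanodes
# On a data node, look at the tail of the DataNode log for the startup failure
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log
# Hit WebHDFS directly with the same endpoint the service check used
curl -sS 'http://hdp-m.asotc:50070/webhdfs/v1/tmp?op=LISTSTATUS&user.name=hdfs'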
Labels: Apache Hadoop
07-27-2016 05:06 PM
Using LDAP is the plan, but making it work doesn't seem to be simple. Thanks, Prakash
07-27-2016 02:39 AM
How about the web interfaces, Ambari and Hue? Can user accounts on Ambari and Hue be synced with Linux accounts? Also, why do I need a Linux account on all members of the cluster? I could just give access to one machine, which can be used as the edge node. Thanks
07-27-2016 02:12 AM
Trying to explore the best way to manage users in the Hadoop ecosystem. Basically, I am going to provide the following interfaces to the user community:
a.) Edge node - a Linux machine where users can use their Linux credentials and the command line to run the Hadoop clients (Spark, Sqoop, HDFS, etc.)
b.) Ambari web interface
c.) Hue interface
d.) Ranger - for admins to control file and folder permissions
The question I have: is it possible to create accounts in the Linux environment and have everything else pull from there and use the same credentials? I read about LDAP, but it appears to be difficult, and we don't currently have a working LDAP server. How can I centrally manage users without using LDAP? Thanks Prakash
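One non-LDAP approach worth sketching (assumptions: Ambari at its default port 8080 with an admin login; the username jdoe and host ambari-host are placeholders): keep Linux accounts on the edge node via useradd, and script matching local accounts in each web UI so the usernames stay in sync. Ambari exposes a REST API for its local users:
# Create a local Ambari user (placeholder name/password/host)
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d '{"Users/user_name":"jdoe","Users/password":"changeme","Users/active":true}' \
  http://ambari-host:8080/api/v1/users
Hue similarly keeps its own user list in its useradmin app. Note this duplicates credentials rather than truly centralizing them, which is the gap LDAP (or Kerberos) is meant to close.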
07-26-2016 06:38 PM
@Constantin Stanca & @Sindhu Thanks guys, problem solved. The hue_original.ini file was actually causing the problem. I made that copy to preserve the original configuration, but when Hue restarts it reads every .ini file in its conf directory. I renamed the file with a .txt extension, and now Hue picks up all the configuration correctly. Thanks
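For anyone hitting the same thing, the fix boils down to renaming the backup so Hue ignores it (the conf path below is a placeholder for wherever hue.ini lives):
cd /path/to/hue/conf                        # placeholder: the directory holding hue.ini
mv hue_original.ini hue_original.ini.txt    # anything without a .ini suffix is skipped
# then restart Hue so it re-reads only hue.ini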
07-26-2016 06:01 PM
@Sindhu hue_original.ini is the original copy of hue.ini, which I made before modifying hue.ini with the Hadoop cluster information.
07-26-2016 06:00 PM
@Constantin Stanca I uploaded the Hadoop cluster configuration section from hue.ini. Please take a look.
07-25-2016 05:50 PM
Thank you @Constantin Stanca @Neeraj Sabharwal. I manually changed hue.ini to point to the correct HiveServer2 and NameNode/ResourceManager, and restarted Hue multiple times. Please take a look at the ownership of the hue.ini file (root/root); I am using the root account. Take a look at hue.ini, which has the entry for webhdfs_url; it's pointing to the correct NameNode. Also check the screenshot of the error I am getting. Thanks Prakash
[root@EdgeNode conf]# ls -l
total 48
-rwxr-xr-x. 1 root root 1785 Apr 20 16:12 hue_httpd.conf
-rwxr-xr-x. 1 root root 16999 Jul 22 14:03 hue.ini
-rwxr-xr-x. 1 root root 16845 Jul 22 12:51 hue_original.ini
-rwxr-xr-x. 1 root root 1984 Apr 20 16:12 log.conf
# Enter the filesystem uri
fs_defaultfs=hdfs://namenode.asotc.com:8020
# Use WebHdfs/HttpFs as the communication mechanism. To fallback to
# using the Thrift plugin (used in Hue 1.x), this must be uncommented
# and explicitly set to the empty value.
webhdfs_url=http://namenode.asotc.com:50070/webhdfs/v1/
## security_enabled=true
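A quick sanity check that bypasses Hue entirely, reusing the hostname from the snippet above, is to call WebHDFS directly:
# A JSON FileStatuses response proves WebHDFS is reachable at this URL;
# if it works while Hue still reports localhost, Hue is not reading this hue.ini
curl -sS 'http://namenode.asotc.com:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue'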
07-22-2016 06:17 PM
Successfully installed Hue and modified all the parameters in hue.ini, such as webhdfs_url; I also set the proxyuser parameters in the HDFS configuration and enabled WebHDFS. But Hue is not able to connect to the HDFS file system: it's trying the URL http://localhost:50070/webhdfs/v1 even though the correct hostname is listed in hue.ini. It doesn't seem to read the configuration from hue.ini. Below is what I am getting from the Hue web interface:
Potential misconfiguration detected. Fix and restart Hue.
hadoop.hdfs_clusters.default.webhdfs_url Current value: http://localhost:50070/webhdfs/v1/
Failed to access filesystem root
Resource Manager Failed to contact Resource Manager at http://localhost:8088/ws/v1: HTTPConnectionPool(host='localhost', port=8088): Max retries exceeded with url: /ws/v1/cluster/apps (Caused by : [Errno 111] Connection refused)
Beeswax (Hive UI) The application won't work without a running HiveServer2.
hcatalog.templeton_url Current value: http://localhost:50111/templeton/v1/
HTTPConnectionPool(host='localhost', port=50111): Max retries exceeded with url: /templeton/v1/status?user.name=hue&doAs=hue (Caused by : [Errno 111] Connection refused)
Oozie Editor/Dashboard The app won't work without a running Oozie server
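For reference, the proxyuser settings mentioned above normally live in HDFS core-site (set via Ambari as custom core-site properties); a typical pair, assuming the Hue service user is hue:
hadoop.proxyuser.hue.hosts=*
hadoop.proxyuser.hue.groups=*
That said, since every URL above is stuck at localhost, the more telling symptom is that Hue is not reading hue.ini at all; the 07-26-2016 reply further up traces that to a stray extra .ini file in the conf directory.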
Labels: Cloudera Hue