Member since: 09-28-2015
Posts: 22
Kudos Received: 23
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1322 | 07-08-2016 11:09 AM |
| | 2529 | 06-29-2016 03:47 PM |
| | 1444 | 04-07-2016 05:12 PM |
| | 2460 | 03-08-2016 02:20 PM |
| | 6451 | 03-04-2016 01:00 PM |
07-16-2018
09:42 AM
1 Kudo
Hi Satish, You are running out of disk space on whichever node this job runs on. You can track it down from the ResourceManager UI. Check your nodes' disk space and ensure that the temporary locations for Hive (and potentially YARN) have enough space to process the spill. Dave
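As a rough sketch of where to look (the Hive and YARN local directories below are assumptions - check hive.exec.local.scratchdir and yarn.nodemanager.local-dirs for your actual values):
# free space on the node the container ran on
df -h
# size of the Hive local scratch directory (commonly /tmp/hive)
du -sh /tmp/hive
# size of the YARN NodeManager local dirs (commonly /hadoop/yarn/local)
du -sh /hadoop/yarn/local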
10-24-2016
03:03 PM
OK, this is due to the NameNode not being active. Did you start your NameNode as the hdfs user? What are the permissions on the JournalNode directories? AccessControlException: Access denied for user root. Superuser privilege is required 2016-10-24 11:26:12,138 WARN namenode.FSEditLog (JournalSet.java:selectInputStreams(280)) - Unable to determine input streams from QJM to [192.168.1.161:8485, 192.168.1.162:8485, 192.168.1.163:8485]. Can you make sure both your JournalNodes and your NameNodes are running as hdfs?
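For illustration only (the journal edits directory below is an assumption - check dfs.journalnode.edits.dir for your actual path), you could verify with something like:
# confirm the JournalNode and NameNode processes are running as hdfs
ps -ef | grep -i journalnode
ps -ef | grep -i namenode
# check ownership of the JournalNode edits directory
ls -ld /hadoop/hdfs/journal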
10-24-2016
01:54 PM
1 Kudo
Hi @Jessika314_ninja, It looks like the NameNodes cannot contact the JournalNodes - can you check that the JNs are running? You say the ZKFC stops after starting it - can you share the last 200-500 lines of its log? Thanks Dave
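If it helps, a rough way to gather that (the log path and file name pattern are assumptions and vary by install):
# on each JournalNode host, confirm the process is up
ps -ef | grep -i journalnode
# on the NameNode host, grab the tail of the ZKFC log
tail -n 500 /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-$(hostname).log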
10-21-2016
07:28 AM
Hi Arun, You should stick to the components which are shipped with the HDP stack as a whole; these are tested and certified together. There is no way in Ambari to upgrade only one component - specifically for this reason. If you feel you require this version of HBase then you should consider upgrading your stack to HDP 2.5. Thanks Dave
10-19-2016
06:16 PM
When running the YARN service check in a ResourceManager HA environment, the service check fails while all other functionality works correctly (restarting services, running jobs, etc.). When you run the service check you see: stderr: /var/lib/ambari-agent/data/errors-392.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 159, in <module>
ServiceCheck().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 130, in service_check
info_app_url = params.scheme + "://" + rm_webapp_address + "/ws/v1/cluster/apps/" + application_name
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'yarn.resourcemanager.webapp.address.' was not found in configurations dictionary!
Notice the . at the end of the parameter - yarn.resourcemanager.webapp.address.
This is a result of having yarn.resourcemanager.ha.rm-ids set to rm1,rm2, (notice the comma at the end). This leads to the Ambari scripts putting these values into an array for checking, where {{rm_alias}} is set to rm1, rm2 and then a blank value.
To fix this issue, remove the trailing comma from the value of this property. After removing it and restarting YARN, the service check will pass.
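As a sketch of how to confirm the broken value before editing it in Ambari under YARN > Configs (the Ambari host, credentials and cluster name below are placeholders):
# dump yarn-site from Ambari and look at the rm-ids value
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get ambari-host CLUSTER_NAME yarn-site | grep rm-ids
# a broken value carries the trailing comma:  "yarn.resourcemanager.ha.rm-ids" : "rm1,rm2,"
# the corrected value should read:            "yarn.resourcemanager.ha.rm-ids" : "rm1,rm2"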
07-21-2016
03:01 PM
2 Kudos
SYMPTOM: When you create an Oozie workflow which contains an SSH action, it can fail with an error of "Not able to perform operation [ssh -o PasswordAuthentication=no -o KbdInteractiveDevices=no -o StrictHostKeyChecking=no -o ConnectTimeout=20 root@localhost mkdir -p oozie-oozi/0000009-131023115150605-oozie-oozi-W/Ssh--ssh/ ] | ErrorStream: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)."
ROOT CAUSE: Passwordless SSH has not been set up correctly from oozie@server1 to $user@server2.
RESOLUTION: If an Oozie workflow contains an SSH command from server1 to server2 as root, then passwordless SSH must be set up as follows:
oozie@server1 > root@server2
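A minimal sketch of that setup, assuming the Oozie server runs as the oozie user on server1 and server2 is the target host:
# on server1, as the oozie user, generate a key pair if one does not already exist
su - oozie
ssh-keygen -t rsa
# copy the public key to root on server2
ssh-copy-id root@server2
# verify that login now works without a password prompt
ssh -o PasswordAuthentication=no root@server2 hostname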
This article was created by Hortonworks Support (Article: 000001722) on 2015-04-01 11:05. OS: Linux. Type: Configuration, Executing_Jobs
07-08-2016
11:09 AM
Hi Roberto, You have an invalid configuration character on line 1074 of your hue.ini. Thanks Dave
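To see exactly what is on and around that line (the path assumes the default Hue configuration location):
sed -n '1070,1078p' /etc/hue/conf/hue.ini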
06-29-2016
03:47 PM
1 Kudo
Hi Anthony, You should modify /etc/hue/conf/hue.ini - please ensure there are only a log.conf and a hue.ini in this location, because any .ini files here will be read in one by one, with later files overwriting earlier values. To check your current effective configuration you can use: /usr/lib/hue/build/env/bin/hue config_dump Thanks Dave
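As a quick sanity check (paths assume a default Hue install):
# only hue.ini should appear here
ls -l /etc/hue/conf/*.ini
# dump the effective merged configuration
/usr/lib/hue/build/env/bin/hue config_dump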
05-19-2016
02:32 PM
Do you have your oozie-site.xml to hand? Also, from the screenshot where you show the workflow.xml, can you share the configuration tab in Hue? The sharelib looks fine; what does your hue.ini look like, and what do you see if you go to the configuration tab in Oozie (where you showed the screenshot of the XML)?
05-17-2016
02:33 PM
Hi, What is the output of: oozie admin -oozie http://localhost:11000/oozie -shareliblist
You should also be able to see it with: hadoop fs -ls /user/oozie/share/lib/lib_<timestamp>/ What is Hue set up to use in your job.properties & workflow.xml? Thanks Dave
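For reference, a minimal job.properties that points the workflow at the system sharelib usually looks something like this (host names, ports and the application path are placeholders):
nameNode=hdfs://namenode-host:8020
jobTracker=resourcemanager-host:8050
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/my-workflow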