Member since: 09-28-2015
Posts: 22
Kudos Received: 23
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 248 | 07-08-2016 11:09 AM |
| | 717 | 06-29-2016 03:47 PM |
| | 224 | 04-07-2016 05:12 PM |
| | 850 | 03-08-2016 02:20 PM |
| | 2224 | 03-04-2016 01:00 PM |
07-16-2018
09:42 AM
1 Kudo
Hi Satish, You are running out of disk space on whichever node this job runs on. You can check the ResourceManager UI to track it down. Check your nodes' disk space and ensure that the temporary locations for Hive (and potentially YARN) have enough space to process the spill. Dave
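A quick way to check free space on a node is with df; the paths below are common HDP defaults and are assumptions here - adjust them to your configured hive.exec.local.scratchdir and yarn.nodemanager.local-dirs:

```shell
# Check free space where Hive and YARN typically write temporary/spill data.
# /tmp is the usual default for hive.exec.local.scratchdir;
# /hadoop/yarn/local is a common value for yarn.nodemanager.local-dirs.
df -h /tmp
df -h /hadoop/yarn/local 2>/dev/null || true
```

Run this on the worker node the job landed on (visible in the ResourceManager UI), not on the client machine.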
10-24-2016
03:03 PM
Ok, this is due to the NN not being active. Did you start your NameNode as the hdfs user? What are the permissions on the JournalNode directories? AccessControlException: Access denied for user root. Superuser privilege is required 2016-10-24 11:26:12,138 WARN namenode.FSEditLog (JournalSet.java:selectInputStreams(280)) - Unable to determine input streams from QJM to [192.168.1.161:8485, 192.168.1.162:8485, 192.168.1.163:8485]. Can you make sure your JournalNodes are running as hdfs, and also your NameNodes?
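To verify directory ownership, a small check like this can help; /hadoop/hdfs/journal is the common default for dfs.journalnode.edits.dir and is an assumption here:

```shell
# Report the owner of a directory and compare it against the expected user.
check_owner() {
  dir="$1"; expected="$2"
  # GNU stat first, BSD stat as a fallback
  owner=$(stat -c '%U' "$dir" 2>/dev/null || stat -f '%Su' "$dir" 2>/dev/null)
  if [ "$owner" = "$expected" ]; then
    echo "OK: $dir is owned by $owner"
  else
    echo "WRONG OWNER: $dir is owned by '$owner', expected $expected"
  fi
}

# JournalNode edits directory should belong to hdfs
check_owner /hadoop/hdfs/journal hdfs
```

If the owner is wrong, chown the directory tree back to hdfs:hadoop before restarting the JournalNodes.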
10-24-2016
01:54 PM
1 Kudo
Hi @Jessika314_ninja, It looks like the NameNodes cannot contact the JournalNodes - can you check that the JNs are running? You say the zkfc stops after starting it - can you share the last 200-500 lines of its log? Thanks Dave
10-21-2016
08:03 AM
Hi @Avijeet Dash Are you using Ambari? If so, you can register the new 2.5 version. You will then be prompted to install this version on all your hosts (this only installs the binaries). After this, you will have the option of an express upgrade (downtime) or a rolling upgrade (no downtime, as long as you have HA for the components). Express is much quicker, but the cluster will not be usable during that time. Once you complete the upgrade there are a few post-upgrade steps to be followed here: http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-upgrade/content/upgrading_HDP_post_upgrade_tasks_ranger_kerberos.html Please also follow the upgrade planning to ensure your OS, JDK etc. are supported and you have enough disk space on your nodes: http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-upgrade/content/upgrading_HDP_prerequisites.html Thanks Dave
10-21-2016
07:30 AM
Hi @Mohit Kapoor Can you share the log and out files here? Did you upgrade ambari-server at any point? It seems to suggest that you have a conflicting jar file on your system. Is AMS installed on this system, and what about an Ambari agent? Thanks Dave
10-21-2016
07:28 AM
Hi Arun, You should stick to the components shipped with the HDP stack as a whole; these are tested and certified together. There is no way in Ambari to upgrade only one component, specifically for this reason. If you feel you require this version of HBase, then you should consider upgrading your stack to HDP 2.5. Thanks Dave
10-19-2016
06:16 PM
When running the YARN service check in a ResourceManager HA environment, the service check fails while all other functionality works correctly (restarting services, running jobs, etc.). When you run the service check you see: stderr: /var/lib/ambari-agent/data/errors-392.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 159, in <module>
ServiceCheck().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 130, in service_check
info_app_url = params.scheme + "://" + rm_webapp_address + "/ws/v1/cluster/apps/" + application_name
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'yarn.resourcemanager.webapp.address.' was not found in configurations dictionary!

Notice the . at the end of the parameter name: yarn.resourcemanager.webapp.address. This is the result of having yarn.resourcemanager.ha.rm-ids set to rm1,rm2, (note the comma at the end). Ambari's scripts split this value into an array for checking, so {{rm_alias}} is set to rm1, rm2, and then a blank value.

To fix this issue, remove the trailing comma from the value of this property. After removing it and restarting YARN, the service check will pass.
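A quick sanity check for the trailing comma can be done in plain shell; the value below is hard-coded for illustration rather than pulled from a live yarn-site:

```shell
# Value as it appears in the broken configuration; the trailing comma is the bug.
rm_ids="rm1,rm2,"

case "$rm_ids" in
  *,) echo "trailing comma found; corrected value: ${rm_ids%,}" ;;
  *)  echo "value looks fine: $rm_ids" ;;
esac
```

The `${rm_ids%,}` expansion strips exactly one trailing comma, which is the corrected value to put back into yarn.resourcemanager.ha.rm-ids before restarting YARN.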
07-21-2016
03:01 PM
2 Kudos
SYMPTOM: When you create an Oozie workflow which contains an SSH action, it can fail with an error of "Not able to perform operation [ssh -o PasswordAuthentication=no -o KbdInteractiveDevices=no -o StrictHostKeyChecking=no -o ConnectTimeout=20 root@localhost mkdir -p oozie-oozi/0000009-131023115150605-oozie-oozi-W/Ssh--ssh/ ] | ErrorStream: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)."
ROOT CAUSE: Passwordless SSH was not set up correctly from oozie@server1 to $user@server2.
RESOLUTION: If an Oozie workflow contains an SSH command run on server2 as root, then passwordless SSH must be set up as follows:
oozie@server1 > root@server2
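A sketch of the key setup, run as the oozie user on server1; the hostnames are placeholders, and the demo below generates the key in a temp directory so nothing in ~/.ssh is touched:

```shell
# Generate a passphrase-less RSA key (demo location; in practice the oozie
# user's ~/.ssh/id_rsa on server1).
demo=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$demo/id_rsa" -q
ls "$demo"

# For the real setup, push the public key to the target account (asks for
# the remote password once), then verify no password is required:
#   ssh-copy-id -i ~/.ssh/id_rsa.pub root@server2
#   ssh -o PasswordAuthentication=no root@server2 true
```

If the final ssh command succeeds without a password prompt, the Oozie SSH action should be able to reach server2.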
This article created by Hortonworks Support (Article: 000001722) on 2015-04-01 11:05 OS: Linux Type: Configuration,Executing_Jobs
07-08-2016
11:09 AM
Hi Roberto, You have an invalid configuration character on line 1074 of your hue.ini. Thanks Dave
06-29-2016
03:47 PM
1 Kudo
Hi Anthony, You should modify /etc/hue/conf/hue.ini - please ensure there are only log.conf and hue.ini in this location, because any .ini files here are read in one by one, each overwriting the values of the last. To check your most current configuration you can use: /usr/lib/hue/build/env/bin/hue config_dump Thanks Dave
06-06-2016
11:02 AM
Hi Tim Are you using our sandbox? If so, you can use Ambari Views, or you can just log into Hue if the sandbox is HDP 2.3 or earlier. You can still install and configure Hue for the interface, but we would prefer you to use Ambari Views. Let me know if you have any questions, Dave
05-19-2016
02:32 PM
Do you have your oozie-site to hand? Can you also share the configuration tab in Hue, from the screenshot where you show the workflow.xml? The sharelib looks fine. What does your hue.ini file look like, and what do you see in the configuration tab in Oozie (where you showed the screenshot of the XML)?
05-17-2016
02:33 PM
Hi, What is the output of: oozie admin -oozie http://localhost:11000/oozie -shareliblist
You should also be able to see: hadoop fs -ls /user/oozie/share/lib/lib_<timestamp>/
What is Hue set up to use in your job.properties & workflow.xml? Thanks Dave
05-05-2016
10:24 AM
Hi Amira, What database are you using for Hue - is it SQLite or MySQL? It looks like something is not set up correctly here - maybe you can run /usr/lib/hue/build/env/bin/hue syncdb and see if the issue persists. Also change the Hue logging level to debug in /etc/hue/conf/log.conf (iirc) and check /var/log/hue/ when you hit the URL. To me it looks like something on the DB side, but it's difficult to say from the screenshot. Thanks Dave
04-07-2016
05:12 PM
3 Kudos
Hi Prakash, Hue 3.8 is not supported on HDP platforms, but you can install it. To upgrade you should just follow the normal Hue upgrade instructions (Hortonworks do not provide these - as I said earlier, this is not supported); however, please ensure you back up the Hue database first. Thanks Dave
04-06-2016
03:39 PM
Hi Hari, On the Falcon and Oozie server machines, can you try and run: yum install oozie* falcon* If they are already installed, then we can look at removing them and re-installing them. Let me know how you get on. Dave
03-08-2016
02:20 PM
2 Kudos
Hi Nilesh, Usually your hue.ini should have a webhdfs address. This should be listening on port 50070 and be reachable from the Hue machine. Can you telnet from the Hue machine to this server on port 50070? Also check that your NameNode has WebHDFS enabled (this should be set by default). Thanks Dave
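A small helper to build the WebHDFS URL Hue needs; the hostname is a placeholder, and the curl/telnet checks are what you would run from the Hue machine:

```shell
# Construct a WebHDFS LISTSTATUS URL for a given NameNode host and HDFS path.
webhdfs_url() {
  # $1 = NameNode host, $2 = HDFS path
  echo "http://$1:50070/webhdfs/v1$2?op=LISTSTATUS"
}

webhdfs_url namenode.example.com /
# From the Hue host, test reachability with:
#   curl -s "$(webhdfs_url namenode.example.com /)"
# or simply check the port is open:
#   telnet namenode.example.com 50070
```

A JSON FileStatuses response from curl confirms both that the port is reachable and that WebHDFS is enabled on the NameNode.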
03-04-2016
01:00 PM
2 Kudos
Hi Nirvana, If you are using Ambari, you can drill into Hive; you will see the Hive components there (Metastore, HiveServer2, WebHCat). Click on HiveServer2 and it will take you to the page for the host running it. Otherwise you can check in ZooKeeper: open the ZooKeeper CLI (/usr/hdp/current/zookeeper-server/bin/zkCli.sh) and run ls /hiveserver2. This will output something along the lines of:
[zk: localhost:2181(CONNECTED) 1] ls /hiveserver2
[serverUri=sandbox.hortonworks.com:10000;version=1.2.1.2.3.2.0-2950;sequence=0000000023] However, HS2 must be running. Thanks Dave
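To pull just the host:port out of that znode name, a little sed works; the string below is copied from the example output above:

```shell
# Znode name as returned by `ls /hiveserver2` in zkCli.sh
znode='serverUri=sandbox.hortonworks.com:10000;version=1.2.1.2.3.2.0-2950;sequence=0000000023'

# Strip the serverUri= prefix and everything after the first semicolon.
echo "$znode" | sed 's/^serverUri=//; s/;.*//'
# -> sandbox.hortonworks.com:10000
```

The result is the HiveServer2 endpoint you would point Beeline or a JDBC client at.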
02-25-2016
03:17 PM
Has anyone got this working from Linux to Windows?
11-23-2015
04:28 PM
At this time it's good to upgrade South if you are not already running 0.8 - as databases created in MySQL with 0.7 can have issues with upgrades - do the following before the above:
# su - hue
# cd /usr/lib/hue
# source ./build/env/bin/activate
# pip install --upgrade South==0.8.2
# deactivate
11-23-2015
04:14 PM
Run the following commands:
cd /usr/lib/hue/tools
chmod 755 fill_versions.sh
vi fill_versions.sh
Comment out this line:
grep "^#FIXME" ${hue_dir}/VERSIONS &>/dev/null || exit
Save the file and restart Hue. The versions will now correctly reflect what is installed via the repository, and fill_versions.sh will run each time Hue starts, populating the correct version every time. There is also an issue with rolling upgrades, because Hue runs rpm and greps the first line for the Hadoop version. I modified my fill_versions.sh script to use 'hadoop version' and awk the result.
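The 'hadoop version' approach mentioned above can be sketched like this; the sample string stands in for a live cluster's output:

```shell
# The first line of `hadoop version` looks like: "Hadoop 2.7.3.2.5.0.0-1245".
# On a real node you would pipe the command itself:
#   hadoop version | head -1 | awk '{print $2}'
sample="Hadoop 2.7.3.2.5.0.0-1245"
echo "$sample" | awk '{print $2}'
# -> 2.7.3.2.5.0.0-1245
```

Unlike grepping rpm output, this keeps working after a rolling upgrade because it asks the active binaries for their version directly.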
09-29-2015
10:47 AM
11 Kudos
You need to log into the database. For example, with Postgres:
1. Log on to the Ambari server host shell
2. Run 'psql -U ambari-server ambari'
3. Enter password 'bigdata'
4. In psql, run:
update ambari.users set user_password='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' where user_name='admin';
5. Quit psql
6. Run 'ambari-server restart'
This will reset the admin account back to the password 'admin'.