Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 14999 | 03-08-2019 06:33 PM |
| | 6178 | 02-15-2019 08:47 PM |
| | 5098 | 09-26-2018 06:02 PM |
| | 12592 | 09-07-2018 10:33 PM |
| | 7446 | 04-25-2018 01:55 AM |
12-09-2016
07:36 PM
1 Kudo
@shyam gurram - If you are not using Falcon, just remove it. You can always install it later.
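In case it helps, one way to do the removal without the UI is through the Ambari REST API. This is only a sketch; the Ambari host, credentials, and cluster name below are placeholders you would replace with your own.

```bash
# Hypothetical Ambari endpoint, admin credentials and cluster name.
AMBARI="http://ambari-host.example.com:8080"
CLUSTER="mycluster"

# Stop the Falcon service first (Ambari requires a service to be stopped before removal).
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop Falcon"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  "$AMBARI/api/v1/clusters/$CLUSTER/services/FALCON"

# Then delete the service definition from the cluster.
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  "$AMBARI/api/v1/clusters/$CLUSTER/services/FALCON"
```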
12-01-2016
08:28 AM
Thanks for the reply, jss. I have already tried everything you suggested, but I am still getting the same issue. When I start the DataNode through the Ambari UI, the following error occurs: File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. /etc/profile: line 45: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
-bash: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
starting datanode, logging to /data/log/hadoop/hdfs/hadoop-hdfs-datanode-.out
/usr/hdp/2.3.4.7-4//hadoop-hdfs/bin/hdfs.distro: line 30: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh: line 187: /dev/null: Permission denied
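For what it's worth, the log points at /dev/null itself being unwritable on that node. A minimal diagnostic and repair sketch, assuming standard coreutils and root access:

```bash
# /dev/null should be a character device with mode crw-rw-rw- (major 1, minor 3).
ls -l /dev/null

# If it has been replaced by a regular file or has lost its permissions,
# recreate it and restore world read/write access.
rm -f /dev/null
mknod -m 666 /dev/null c 1 3
```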
11-28-2016
06:11 PM
3 Kudos
Here is the scenario:
1. I have a workflow.xml which contains a Hive action.
2. I have added a <job-xml> tag inside the Hive action and provided the path to hive-site.xml (say /tmp/hive-site.xml).
3. I have added hive-site.xml to the ${wf.application.path}/lib directory as well.
4. I have added hive-site.xml to the Oozie sharelib under:
/user/oozie/sharelib/lib_<timestamp>/oozie/hive-site.xml
/user/oozie/sharelib/lib_<timestamp>/sqoop/hive-site.xml
/user/oozie/sharelib/lib_<timestamp>/hive/hive-site.xml
5. My simple Hive workflow is failing with the below error:
Oozie Hive action configuration
=================================================================
Using action configuration file /hadoop/data01/hadoop/yarn/local/usercache/root/appcache/application_1443111597609_2691/container_1443111597609_2691_01_000002/action.xml
------------------------
Setting env property for mapreduce.job.credentials.binary to: /hadoop/data01/hadoop/yarn/local/usercache/root/appcache/application_1443111597609_2691/container_1443111597609_2691_01_000002/container_tokens
------------------------
------------------------
Setting env property for tez.credentials.path to: /hadoop/data01/hadoop/yarn/local/usercache/root/appcache/application_1443111597609_2691/container_1443111597609_2691_01_000002/container_tokens
------------------------
<<< Invocation of Main class completed <<<
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.HiveMain], main() threw exception, hive-site.xml (Permission denied)
java.io.FileNotFoundException: hive-site.xml (Permission denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:110)
at org.apache.oozie.action.hadoop.HiveMain.setUpHiveSite(HiveMain.java:166)
at org.apache.oozie.action.hadoop.HiveMain.run(HiveMain.java:196)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:38)
at org.apache.oozie.action.hadoop.HiveMain.main(HiveMain.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:225)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Oozie Launcher failed, finishing Hadoop job gracefully

How to resolve?

Just as having multiple jar files with different versions in the Oozie sharelib can cause ClassNotFound exceptions, having multiple copies of a configuration file can also cause conflicts. In this case Oozie may be trying to overwrite hive-site.xml in the NodeManager's local filesystem (/hadoop/yarn/local/usercache/<username>/appcache/application_id/blah/blah) with one of the copies taken from the sharelib, from ${wf.application.path}/lib, or from the <job-xml> element. To resolve such conflicts, remove the extra copies of hive-site.xml from all of the above locations (a cleanup sketch follows at the end of this post). Oozie uses the hive-site.xml from /etc/oozie/conf/action-conf/hive/hive-site.xml 🙂

To repeat, this issue was resolved by removing hive-site.xml from the locations below:
1. The Oozie sharelib (it was present at multiple locations in the sharelib).
2. The ${wf.application.path}/lib/ directory.
3. The workflow.xml (removed the <job-xml> part).

By default, Oozie takes this file from /etc/oozie/conf/action-conf/hive/hive-site.xml. With Oozie nothing is easy 😉 Please comment if you have any feedback/questions/suggestions. Happy Hadooping!! 🙂
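As a rough illustration of the cleanup, here is a minimal shell sketch. The sharelib paths mirror the ones mentioned above (the <timestamp> is deliberately left as a placeholder), while the workflow application path and the Oozie host are hypothetical and must be adjusted for your cluster.

```bash
# Placeholders: fill in your actual sharelib timestamp directory and workflow path.
SHARELIB_DIR="/user/oozie/sharelib/lib_<timestamp>"
WF_APP_PATH="/user/oozie/workflows/hive-demo"   # hypothetical workflow application path

# 1. Find every stray copy of hive-site.xml in the sharelib.
hdfs dfs -ls -R "$SHARELIB_DIR" | grep hive-site.xml

# 2. Remove the extra copies from the sharelib and from the workflow lib directory.
hdfs dfs -rm "$SHARELIB_DIR/oozie/hive-site.xml" \
             "$SHARELIB_DIR/sqoop/hive-site.xml" \
             "$SHARELIB_DIR/hive/hive-site.xml"
hdfs dfs -rm "$WF_APP_PATH/lib/hive-site.xml"

# 3. Tell the Oozie server to reload the sharelib so the change is picked up.
oozie admin -oozie http://<oozie-host>:11000/oozie -sharelibupdate
```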
11-28-2016
01:37 PM
4 Kudos
Sometimes, after disabling Kerberos successfully, you can get an authentication error while accessing the NameNode or ResourceManager UIs. Because of this, all of the alerts and Ambari Metrics data can be broken, because JMX data won't be available to Ambari Metrics or Alerts via HTTP.

Resolution: Modify the below property in core-site.xml via Ambari and restart the required services.

hadoop.http.authentication.simple.anonymous.allowed=true

Note: most of the time the issue happens because this property is left set to false after disabling Kerberos. Please also compare the below property and check that it is correct:

hadoop.security.authentication=simple

The issue is simple, but it takes a lot of time to troubleshoot, hence posting this article to save someone some time 🙂 Please comment if you face any issues or need any further help on this. Happy Hadooping!! 🙂
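A quick way to confirm the fix, as a minimal sketch: the NameNode host and the default HDP web UI port 50070 are assumptions, so adjust them for your cluster.

```bash
# After setting hadoop.http.authentication.simple.anonymous.allowed=true and
# restarting HDFS/YARN, anonymous JMX access should work again.
# A 401 here usually means the property is still false.
curl -s -o /dev/null -w "%{http_code}\n" "http://<namenode-host>:50070/jmx"

# The effective values can also be read from any client node.
hdfs getconf -confKey hadoop.http.authentication.simple.anonymous.allowed
hdfs getconf -confKey hadoop.security.authentication
```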
04-15-2018
04:34 PM
May I know what action you have taken to get or change the password of the Hive metastore?
11-17-2016
09:56 AM
@Kuldeep Kulkarni, thanks for your reply. I found the job using the command yarn application -list, then killed it using yarn application -kill <application ID>, and it works. 🙂
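For reference, a minimal sketch of that sequence; the -appStates filter is an illustrative addition to keep the listing short, and the application ID remains a placeholder.

```bash
# List running applications and note the application ID of the stuck job.
yarn application -list -appStates RUNNING

# Kill the job by its application ID.
yarn application -kill <application ID>
```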
06-07-2017
02:03 AM
Hi @Kuldeep Kulkarni, thanks for this article. I had a customer who hit an issue because, if Maintenance Mode is ON, Ambari *silently* fails to change the config. Could you update this article to note that Maintenance Mode must be OFF, please?
01-28-2019
09:10 PM
You can find them at https://github.com/HortonworksUniversity/DevPH_Labs
11-01-2016
01:38 PM
4 Kudos
Step 1: Allow the below ports through your OS firewall for Ambari.
https://ambari.apache.org/1.2.5/installing-hadoop-using-ambari/content/reference_chap2_7.html

Step 2: Go through the required components and allow the below ports through your OS firewall for HDP.
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_HDP_Reference_Guide/content/accumulo-ports.html

Step 3: In order to allow YARN jobs to run successfully, we need to add a custom TCP port range to the YARN configuration. Log in to the Ambari UI --> Select MapReduce2 --> Configs --> Custom mapred-site --> Add/modify the below property:

yarn.app.mapreduce.am.job.client.port-range=32000-65000

Notes:
1. 32000-65000 is the port range the MapReduce Application Master will bind to for communication with the job client, so it must be reachable through the firewall.
2. You can increase the number of ports based on job volume.

How to add an exception in the CentOS 7 firewall? Example for Step 3 (see also the verification sketch below):
#firewall-cmd --permanent --zone=public --add-port=32000-65000/tcp
#firewall-cmd --reload

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
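As a minimal end-to-end sketch of Step 3 on a CentOS 7 node with firewalld, using the same zone and port range as above; the --list-ports check at the end is an addition for verification.

```bash
# Open the Application Master client port range in the public zone.
firewall-cmd --permanent --zone=public --add-port=32000-65000/tcp
firewall-cmd --reload

# Verify the rule is active before re-running YARN jobs.
firewall-cmd --zone=public --list-ports
```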