Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 10991 | 03-08-2019 06:33 PM
 | 4766 | 02-15-2019 08:47 PM
 | 4080 | 09-26-2018 06:02 PM
 | 10398 | 09-07-2018 10:33 PM
 | 5479 | 04-25-2018 01:55 AM
12-02-2020
11:17 AM
@bvishal I think it's better to allow the Ambari port in your firewall rules. You can install the cluster with blueprints (not very straightforward, as you will have custom configs), but monitoring and cluster maintenance will be difficult. If security is a concern, you can always configure Ambari with a Knox gateway, with SSL, etc.
03-08-2019
07:38 PM
@n c For some reason I'm not able to add a comment on this post, so I'm replying as an answer: for CDH, you can use Hue to design and submit the workflow. For frequency, here is how you can change the coordinator frequency as per your requirement: frequency="${coord:days(1)}"
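For reference, a minimal daily coordinator could look like the sketch below. This is illustrative only: the app name, HDFS path, and the start/end/workflowAppUri properties are hypothetical placeholders, not taken from this thread.

# Upload a minimal daily coordinator definition to HDFS (all names and paths are placeholders)
hdfs dfs -put - /user/$USER/apps/my-coord/coordinator.xml <<'EOF'
<coordinator-app name="my-coord" frequency="${coord:days(1)}"
                 start="${start}" end="${end}" timezone="UTC"
                 xmlns="uri:oozie:coordinator:0.4">
  <action>
    <workflow>
      <app-path>${workflowAppUri}</app-path>
    </workflow>
  </action>
</coordinator-app>
EOF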
03-08-2019
06:33 PM
1 Kudo
@n c
job.properties -> needs to be on the local filesystem of the Oozie client from which you run the oozie job -run command. There is no need to have this file on HDFS.
workflow.xml -> this needs to be on HDFS.
script -> this needs to be on HDFS.
Regarding your question on the Python script, please refer to the article below for how to create a coordinator (please ignore the input-event part): https://community.hortonworks.com/articles/27497/oozie-coordinator-and-based-on-input-data-events.html
In coordinator.xml, the "<app-path>${workflowAppUri}</app-path>" element is what specifies the location of workflow.xml. You can also use the Ambari Workflow Manager to design coordinators and workflows with an easy web UI. A rough job.properties sketch follows below.
Hope this helps. Please let me know if you have any questions, and please accept my answer if it was helpful. 🙂
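To make the file placement concrete, here is a rough job.properties sketch kept on the Oozie client's local filesystem. All host names, ports, dates, and paths are placeholders to be replaced with your own values:

# Create job.properties locally on the Oozie client (placeholder values throughout)
cat > job.properties <<'EOF'
nameNode=hdfs://namenode-host:8020
jobTracker=resourcemanager-host:8050
oozie.use.system.libpath=true
oozie.coord.application.path=${nameNode}/user/${user.name}/apps/my-coord
workflowAppUri=${nameNode}/user/${user.name}/apps/my-workflow
start=2019-03-09T00:00Z
end=2019-12-31T00:00Z
EOF
# Submit using the local properties file; only coordinator.xml, workflow.xml and scripts live on HDFS
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run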
03-06-2019
07:01 PM
1 Kudo
Hi @n c Can you please check on the NodeManager where this launcher ran? Ideally this file should be created locally on the NodeManager where your shell script was run. Please do let me know if you need any further help.
02-15-2019
08:47 PM
1 Kudo
Please modify the script below on the ambari-server host, then replace the cached copy of the same script with the modified version on each ambari-agent where you have Oozie clients installed.

Script to be modified on the Ambari server:
/var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py

What to modify? Add the line below:
from resource_management import *

Where is the cached copy on the ambari-agents/oozie-clients?
/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py

After this, you may face the error below:
resource_management.core.exceptions.Fail: Cannot find /usr/hdp/current/oozie-client/doc. Possible reason is that /etc/yum.conf contains tsflags=nodocs which prevents this folder from being installed along with oozie-client package. If this is the case, please fix /etc/yum.conf and re-install the package.

To fix this, edit /etc/yum.conf, remove or comment out tsflags=nodocs, and run the command below to reinstall the Oozie packages:
yum reinstall -y oozie_2_6_*

Hope this helps! 🙂
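As a rough sketch, the whole change can be scripted as below. The agent host names are placeholders, and the stack-version directory (4.0.0.2.0) is taken from the paths above and may differ on your cluster:

# On the Ambari server: add the missing import at the top of the service check script
SCRIPT=common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py
sed -i '1i from resource_management import *' /var/lib/ambari-server/resources/$SCRIPT

# Push the modified script over the cached copy on every host that has an Oozie client
for host in agent1.example.com agent2.example.com; do   # placeholder host names
  scp /var/lib/ambari-server/resources/$SCRIPT "$host:/var/lib/ambari-agent/cache/$SCRIPT"
done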
10-26-2018
07:00 PM
Oozie job submission fails in HDP-3.0 with the error below:

Error: E0723 : E0723: Unsupported action type, node [hive] type [org.apache.oozie.service.ActionService]

It looks like the failure happens because Oozie is not aware of the Hive action type. When I checked the value of oozie.service.ActionService.executor.ext.classes in oozie-site.xml, I found that it was set to the value below:

<property>
    <name>oozie.service.ActionService.executor.ext.classes</name>
    <value>org.apache.oozie.action.email.EmailActionExecutor,org.apache.oozie.action.hadoop.ShellActionExecutor,org.apache.oozie.action.hadoop.SqoopActionExecutor,org.apache.oozie.action.hadoop.DistcpActionExecutor</value>
</property>

Note - the Hive action type and other supported action types like Spark/Spark2 are missing from this list. I'm still working with our engineering team to find out why this happens with the recent Oozie version; earlier it used to work without any issues. I will keep you posted.

To fix this issue, modify oozie.service.ActionService.executor.ext.classes to include the org.apache.oozie.action.hadoop.HiveActionExecutor class.

Modified property:

<property>
    <name>oozie.service.ActionService.executor.ext.classes</name>
    <value>org.apache.oozie.action.email.EmailActionExecutor,org.apache.oozie.action.hadoop.ShellActionExecutor,org.apache.oozie.action.hadoop.SqoopActionExecutor,org.apache.oozie.action.hadoop.DistcpActionExecutor,org.apache.oozie.action.hadoop.HiveActionExecutor</value>
</property>

After you modify this, restart the Oozie services via Ambari and resubmit your workflow; it should work without any issue. A quick way to verify the change is sketched at the end of this post.

I hope this saves you valuable troubleshooting time!

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
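If you want to confirm the new value actually took effect after the restart, one option is to dump Oozie's live configuration from the CLI and grep for the property (the host name is a placeholder; 11000 is the default Oozie port):

oozie admin -oozie http://oozie-host:11000/oozie -configuration | grep ActionService.executor.ext.classes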
10-23-2018
08:59 PM
@Sara Alizadeh - The command is the same as what you have given in the question. Can you please let us know what you changed to fix this?
10-19-2018
08:57 PM
When the Oozie launcher (a map-only MapReduce job) gets scheduled on a RHEL7 node in a mixed-OS environment, it may fail with the error below (stderr section of the Oozie launcher logs):

Container: container_e1XX_XXXXXXX_0X_00000X on XXXXXX_XXX_XXXXXX
LogAggregationType: AGGREGATED
===============================================================================================================
LogType:stderr
Log Upload Time:Tue XXXXXXXXXXXX
LogLength:XX
Log Contents:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
/usr/bin/env: bash: No such file or directory
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.impl.MetricsSystemImpl).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Why did it fail? This happens because "/bin" and "/sbin" are missing from $PATH in the container launch environment. The $PATH variable is derived from the NodeManager's environment, and the NodeManager gets its environment from the ambari-agent's /var/lib/ambari-agent/ambari-env.sh.

How to fix this? Add "/bin" and "/sbin" to the PATH in /var/lib/ambari-agent/ambari-env.sh, restart the ambari-agent, and then restart the NodeManager; a sketch follows below.

Note - The launcher may also fail with an "ln: command not found" error; please follow the same resolution in that case as well.

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
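A sketch of the fix on an affected RHEL7 node is below. The exact contents of ambari-env.sh vary by Ambari version, so treat the export line as illustrative; the point is simply that /bin and /sbin must end up on PATH:

# In /var/lib/ambari-agent/ambari-env.sh, append /bin and /sbin to the PATH
# exported there (illustrative; your file's existing PATH line will differ):
#   export PATH=$PATH:/bin:/sbin
ambari-agent restart
# Then restart the NodeManager on this host via Ambari so that new containers
# inherit the corrected PATH.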
10-11-2018
06:28 PM
Troubleshooting an Oozie job is a pain! It kills your time and patience 🙂 Here are a few steps which can save you valuable time:

1. Always check the Oozie launcher's stderr section to see if there is any error. Please find a useful article here on how to check Oozie launcher logs.

2. Check the stdout logs to see whether Oozie launched a child job whose error caused the launcher to fail. Expand the stdout section and search for the string "Submitted application" to see which child jobs were triggered by the launcher.

3. A few situations are more complex to troubleshoot: the child job completes successfully, there is no error in the stderr section, and still the launcher fails with a "Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]" error.

Sample stdout logs:

2016-12-06 09:03:39,986 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1367)) - map 100% reduce 0%
2016-12-06 09:03:39,991 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1378)) - Job job_XXXXXX_YYYY completed successfully
.
.
.
2016-12-06 09:03:40,228 DEBUG [main] hive.TableDefWriter (TableDefWriter.java:getLoadDataStmt(252)) - Load statement: LOAD DATA INPATH 'hdfs://XXXXXXX' OVERWRITE INTO TABLE `XXXXXX`
65695 [main] INFO org.apache.sqoop.hive.HiveImport - Loading uploaded data into Hive
2016-12-06 09:03:40,229 INFO [main] hive.HiveImport (HiveImport.java:importTable(195)) - Loading uploaded data into Hive
.
65711 [main] DEBUG org.apache.sqoop.hive.HiveImport - Using in-process Hive instance.
2016-12-06 09:03:40,245 DEBUG [main] hive.HiveImport (HiveImport.java:executeScript(326)) - Using in-process Hive instance.
[Loaded org.apache.sqoop.util.SubprocessSecurityManager from file:/dataXXX/hadoop/yarn/local/filecache/693/sqoop-1.4.6.2.3.4.0-3485.jar]
[Loaded org.apache.sqoop.util.ExitSecurityException from file:/dataXXX/hadoop/yarn/local/filecache/693/sqoop-1.4.6.2.3.4.0-3485.jar]
[Loaded com.cloudera.sqoop.util.ExitSecurityException from file:/dataXXX/hadoop/yarn/local/filecache/693/sqoop-1.4.6.2.3.4.0-3485.jar]
65714 [main] DEBUG org.apache.sqoop.util.SubprocessSecurityManager - Installing subprocess security manager
2016-12-06 09:03:40,248 DEBUG [main] util.SubprocessSecurityManager (SubprocessSecurityManager.java:install(59)) - Installing subprocess security manager
[Loaded org.apache.hadoop.hive.ql.metadata.HiveException from file:/dataXXX/hadoop/yarn/local/filecache/778/hive-exec-1.2.1.2.3.4.0-3485.jar]
[Loaded org.apache.hadoop.hive.ql.security.authorization.plugin.HiveMetastoreClientFactory from file:/dataXXX/hadoop/yarn/local/filecache/778/hive-exec-1.2.1.2.3.4.0-3485.jar]
.
.
.
[Loaded org.apache.oozie.action.hadoop.JavaMainException from file:/dataXXX/hadoop/yarn/local/filecache/365/oozie-sharelib-oozie-4.2.0.2.3.4.0-3485.jar]
[Loaded org.apache.oozie.action.hadoop.LauncherMainException from file:/dataXXX/hadoop/yarn/local/filecache/365/oozie-sharelib-oozie-4.2.0.2.3.4.0-3485.jar]
Intercepting System.exit(1)
<<< Invocation of Main class completed <<<
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
Oozie Launcher failed, finishing Hadoop job gracefully

How to troubleshoot this? By default, when a YARN application finishes, the NodeManager deletes temporary data from the local container directories. For the issue above, we have to retain that data for some time and check hive.log inside the container directory. Below are the detailed steps:

1. Add the property below in yarn-site.xml to retain the container directory after the application finishes (I have set it to 30 minutes; change the value as per your convenience):
yarn.nodemanager.delete.debug-delay-sec=1800
2. Restart the required services via Ambari.
3. Rerun the Oozie job.
4. Go to the failed launcher job's logs and find the NodeManager where the (failed) launcher ran.
5. Expand the "launch container" section of the application logs.
6. Find the value of PWD.
7. Log in to that NodeManager and cd to $PWD (the value obtained in step 6).
8. Find the file named hive.log inside the container's directory, e.g. find . -name hive.log
9. hive.log should have the actual error, which is not visible in the application logs. (Steps 1 and 8 are sketched at the end of this post.)

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
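A quick sketch of steps 1 and 8 follows. The yarn-site.xml snippet shows the property as it would land in the file (normally set via Ambari), and the base path in the find command is an assumption, since it depends on your yarn.nodemanager.local-dirs setting:

# Step 1 -- the property as it lands in yarn-site.xml (value in seconds):
#   <property>
#     <name>yarn.nodemanager.delete.debug-delay-sec</name>
#     <value>1800</value>
#   </property>

# Step 8 -- on the NodeManager, after the rerun fails, locate hive.log in the
# retained container directory (adjust the base path to yarn.nodemanager.local-dirs):
find /hadoop/yarn/local/usercache -name hive.log 2>/dev/null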
10-10-2018
11:57 PM
This has been tested on Ambari 2.6.2.0 and DLM 1.1.2.0. If there is a broken symlink or an unwanted directory under /var/lib/ambari-server/resources on the Ambari server, you get the error below while installing the mpack (management pack) for the Beacon service:

[root@XXXXXX ~]# ambari-server install-mpack --mpack /root/beacon-ambari-mpack-1.1.2.0-37.tar.gz --verbose
Using python /usr/bin/python
Installing management pack
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Installing management pack /root/beacon-ambari-mpack-1.1.2.0-37.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Download management pack to temp location /var/lib/ambari-server/data/tmp/beacon-ambari-mpack-1.1.2.0-37.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Expand management pack at temp location /var/lib/ambari-server/data/tmp/beacon-ambari-mpack-1.1.2.0-37/
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Stage management pack beacon-engine.mpack-1.1.0.0 to staging location /var/lib/ambari-server/resources/mpacks/beacon-engine.mpack-1.1.0.0
INFO: Processing artifact BEACON-common-services of type service-definitions in /var/lib/ambari-server/resources/mpacks/beacon-engine.mpack-1.1.0.0/common-services
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Symlink: /var/lib/ambari-server/resources/common-services/BEACON/1.1.0
INFO: Processing artifact BEACON-addon-services of type stack-addon-service-definitions in /var/lib/ambari-server/resources/mpacks/beacon-engine.mpack-1.1.0.0/addon-services
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 952, in <module>
mainBody()
File "/usr/sbin/ambari-server.py", line 922, in mainBody
main(options, args, parser)
File "/usr/sbin/ambari-server.py", line 874, in main
action_obj.execute()
File "/usr/sbin/ambari-server.py", line 78, in execute
self.fn(*self.args, **self.kwargs)
File "/usr/lib/ambari-server/lib/ambari_server/setupMpacks.py", line 896, in install_mpack
(mpack_metadata, mpack_name, mpack_version, mpack_staging_dir, mpack_archive_path) = _install_mpack(options, replay_mode)
File "/usr/lib/ambari-server/lib/ambari_server/setupMpacks.py", line 794, in _install_mpack
process_stack_addon_service_definitions_artifact(artifact, artifact_source_dir, options)
File "/usr/lib/ambari-server/lib/ambari_server/setupMpacks.py", line 554, in process_stack_addon_service_definitions_artifact
sudo.symlink(source_service_version_path, dest_link)
File "/usr/lib/ambari-server/lib/resource_management/core/sudo.py", line 124, in symlink
os.symlink(source, link_name)
OSError: [Errno 17] File exists

Please follow the steps below to fix this:

1. Make sure there is no backup directory under /var/lib/ambari-server/resources such as common-services.backup or stacks.old. If one exists, please move it to some other location.

2. Delete or move the directories below to another location:
/var/lib/ambari-server/resources/common-services/BEACON
/var/lib/ambari-server/resources/mpacks

3. Check if there is any broken symlink for BEACON under the stacks directory. If one exists, unlink it:
unlink /var/lib/ambari-server/resources/stacks/HDP/2.6/services/BEACON

4. Reinstall the mpack using the command mentioned in the Hortonworks docs, e.g.:
ambari-server install-mpack --mpack /root/beacon-ambari-mpack-1.1.2.0-37.tar.gz --verbose

Steps 2-4 are consolidated in the sketch at the end of this post.

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
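For convenience, steps 2-4 consolidate into the sketch below, run on the Ambari server (step 1 remains a manual check for leftover *.backup / *.old directories under resources). The .bak destinations are arbitrary, and the stack path assumes HDP 2.6 as in step 3:

# 2. Move stale Beacon artifacts and the mpacks directory out of the way
mv /var/lib/ambari-server/resources/common-services/BEACON /tmp/BEACON.bak
mv /var/lib/ambari-server/resources/mpacks /tmp/mpacks.bak
# 3. Drop the broken BEACON symlink under the stack, if present
[ -L /var/lib/ambari-server/resources/stacks/HDP/2.6/services/BEACON ] && \
  unlink /var/lib/ambari-server/resources/stacks/HDP/2.6/services/BEACON
# 4. Reinstall the mpack
ambari-server install-mpack --mpack /root/beacon-ambari-mpack-1.1.2.0-37.tar.gz --verbose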