Member since: 01-08-2018
Posts: 133
Kudos Received: 31
Solutions: 21

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 17286 | 07-18-2018 01:29 AM |
| | 3096 | 06-26-2018 06:21 AM |
| | 5250 | 06-26-2018 04:33 AM |
| | 2707 | 06-21-2018 07:48 AM |
| | 2232 | 05-04-2018 04:04 AM |
07-18-2018
01:26 AM
Most probably, your system does not trust the certificates of repo1.maven.... You should check this and, if needed, import the certificates of the respective CA into your truststore.
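Since the repository is fetched through Java tooling, a minimal sketch of importing a CA certificate into the JVM truststore could look like the following; the alias, certificate file path, and truststore location are placeholders, not values from your system:

```sh
# Hypothetical example: import the repository's CA certificate into the JVM truststore.
# Adjust the truststore path to match your JDK; "changeit" is the default truststore password.
keytool -importcert -alias repo-ca \
  -file /tmp/repo-ca.crt \
  -keystore $JAVA_HOME/jre/lib/security/cacerts \
  -storepass changeit
```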
07-18-2018
01:22 AM
You can try using the Cloudera Manager API:
https://www.cloudera.com/documentation/enterprise/5-14-x/topics/cm_intro_api.html
https://cloudera.github.io/cm_api/apidocs/v19/index.html
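As a quick sketch of calling it (hostname, port, and credentials below are placeholders):

```sh
# Hypothetical example: list clusters via the Cloudera Manager REST API (v19).
# Replace cm-host and the admin credentials with your own values.
curl -u admin:admin "http://cm-host:7180/api/v19/clusters"
```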
07-18-2018
01:19 AM
I cannot see any clear indication in the logs, other than that the connection to MySQL failed for some reason. The MySQL Java connector seems to be in place.

Have you installed MySQL with the default options? Port 3306? If, for example, you have used 3307, then you should also specify "-P 3307". I recommend specifying it even if you are using the default port.

Have you granted permissions to the "temp" user according to the instructions?

PS: since your MySQL is on localhost, you do not need to define another user; you can use root:

/usr/share/cmf/schema/scm_prepare_database.sh mysql -h localhost -u root -p --verbose scmdb scmuser
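For reference, a minimal sketch of granting privileges to a temporary user, following the pattern in the Cloudera install instructions; the user name, password, and host wildcard are placeholders you should tighten for your environment:

```sh
# Hypothetical example: grant privileges to a temporary "temp" user from the mysql client
# (MySQL 5.x syntax). Restrict the password and host pattern as appropriate.
mysql -u root -p -e "GRANT ALL ON *.* TO 'temp'@'%' IDENTIFIED BY 'temp_password' WITH GRANT OPTION;"
```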
07-13-2018
12:20 AM
Regarding Python 2: if your Hive server is configured with SSL, then you should consider installing the "sasl" package in Python. As for Python 3, although this is a Python question and not Hive related, usually the issue is on the previous lines, e.g. quotes or parentheses that are not terminated.
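A minimal sketch of installing the usual packages with pip; the exact set depends on your Hive client library, and "thrift" and "thrift-sasl" are common companions rather than something your logs confirm:

```sh
# Hypothetical example: install SASL support for a Python Hive client (e.g. pyhive/impyla).
pip install sasl thrift thrift-sasl
```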
07-12-2018
06:26 AM
If it doesn't work, then probably the e-mail action will not work either. Try to send a mail from the console:

mail -s test_subject user@mail.address << EOF
This is a test e-mail
EOF

There are various ways to send an e-mail. If you need an e-mail when an error is encountered, or when an action takes too long, then you have to enable SLAs on that action and define the recipient. If you need an e-mail saying that the workflow executed successfully, then add a mail action just before the end of the workflow. If any previous action fails, the e-mail action will not be executed, unless of course you have modified the kill transition in one of your actions and pointed it to this e-mail action. You have multiple options to cover multiple scenarios.
07-12-2018
03:38 AM
You mean how the user can submit the job from HUE? If you save the file in HDFS as "workflow.xml", go to the File browser in HUE. You will notice that if you select the checkbox of this file, a "Submit" action button appears, so the user can just hit it.
07-12-2018
01:57 AM
As you said, when it's executed, we don't know on which YARN node the command will run. So, with this storage bay mounted on each node, it doesn't matter on which node it's executed (I think :p). Correct.

Regarding the rest: first of all, you don't have to be a Scala developer to schedule a script in Oozie 🙂 Your command should be "./runConsolidator.sh". What's important is that your script has execute permissions and that you define it in "Files". See the sketch below.

How it works: this shell action runs as a YARN job, so YARN will create a temp folder, e.g. "/yarn/nm/some_id/another_id". All files defined in "Files" of this action will automatically be downloaded into that directory. That directory will be your working directory, so you should run your command with "./" in front, since by default "./" is not in the PATH.

NOTE: If your script uses jar files etc., then you should define all of them in "Files" too, so they are copied to the working directory.

I suggest proceeding with this approach. Writing the XML by hand can get messy, and you need some experience to do it and avoid mistakes. Once you create a working job from HUE, you can export the XML and start playing.
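A minimal sketch of the preparation steps; the HDFS workspace path is a placeholder, only the script name comes from your post:

```sh
# Make the script executable, then stage it in HDFS so it can be
# listed under "Files" in the Oozie shell action.
chmod +x runConsolidator.sh
hdfs dfs -put -f runConsolidator.sh /user/myuser/oozie/app/
```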
07-03-2018
05:09 AM
Hi, sorry if the reply was not very clear; it was written in a hurry. I will try to expand on it later.

First of all, HUE provides a very good interface for writing Oozie jobs, and you will hardly ever need to write the job XML on your own.

You have a shell script with spark-submit. spark-submit will execute something (a jar or a Python file). When you define a "shell action", all files (used as parameters) should exist on the local filesystem (not in HDFS). If, for example, in a shell action you try "cat /etc/hosts", you will get the /etc/hosts file of whichever YARN node the shell action happens to be executed on.

If you have a file in HDFS (e.g. my_hosts) and you define it as a "File" in the shell action, then Oozie will download this file automatically into the working directory. The working directory is a random directory on the YARN node's filesystem, which lives only while this YARN job is being executed. So, if you use the command "cat ./my_hosts", you will get the contents of the "my_hosts" that was downloaded into the working directory.

In general it is not a very good idea to work with files on the slave nodes, because you don't know on which YARN node the command will be executed each time, unless you are sure that you control it and you have deployed all required files to all nodes. Of course we are not discussing temporary files that you may create during execution, but files with metadata or configuration, or files with results that you want to use afterwards. IMHO it is always better to keep these files in HDFS and send the results back to HDFS, so they will be easily accessible by other actions.
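To illustrate, a minimal sketch of what the script body of such a shell action could do; "my_hosts" comes from the example above, while the output path is a placeholder:

```sh
# Hypothetical shell-action script body.
# "my_hosts" was listed under "Files", so Oozie downloaded it
# into the YARN working directory before this script started.
cat ./my_hosts

# Ship results back to HDFS so later actions can read them.
hdfs dfs -put -f results.txt /user/myuser/output/
```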
06-29-2018
03:10 AM
Just to make sure I understand: you don't want the Spark job to use these files from HDFS, but from the local filesystem. If that is the case, then you can create a shell action as you have mentioned. First, put all required files on HDFS. Then define these files in the shell action. Oozie will automatically download them to the working directory on the node where the job will be executed. You don't have to manually distribute anything to the nodes in advance; Oozie will take care of it, you just have to define the files in the job.
06-26-2018
06:21 AM
There is an issue with the hostname you have configured on the host. Can you make sure that "hostname -f" resolves to a valid FQDN?
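A minimal sketch of checking and, if needed, fixing this; the hostname and IP below are placeholders:

```sh
# Check what the host reports as its fully qualified name.
hostname -f

# Hypothetical fix (run as root): set a proper FQDN and map it in /etc/hosts.
hostnamectl set-hostname node1.example.com
echo "192.168.1.10 node1.example.com node1" >> /etc/hosts
```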