Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 15001 | 03-08-2019 06:33 PM |
| | 6178 | 02-15-2019 08:47 PM |
| | 5098 | 09-26-2018 06:02 PM |
| | 12599 | 09-07-2018 10:33 PM |
| | 7446 | 04-25-2018 01:55 AM |
11-11-2016
01:35 PM
3 Kudos
@Ashley Galvan If you are able to ssh from the Oozie server to the docker node via port 2222, then this is possible using Oozie's SSH action. Please refer to https://community.hortonworks.com/articles/7413/oozie-ssh-action.html Note - you might need to set oozie.action.ssh.command.port to 2222 in oozie-site.xml in order to get this working. Please try this and let me know how it goes.
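In case it helps, here is a minimal sketch of how you could verify the connectivity before wiring up the workflow (the hostname and user below are placeholders, not from this thread):
# Run as the oozie user on the Oozie server; the SSH action requires passwordless ssh to the target
sudo -u oozie ssh -p 2222 root@docker-node 'hostname'
# After setting oozie.action.ssh.command.port=2222 in oozie-site.xml (e.g. via Ambari), restart Oozie so the change takes effect.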
11-11-2016
01:06 PM
3 Kudos
@ARUN You can redirect the console output to a file --> grep the application ID from that output file --> use the yarn command to get the job information.
#Run job
[hdfs@prodnode1 ~]$ /usr/hdp/current/hadoop-client/bin/hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.7.1.2.4.2.0-258.jar pi 10 10 1>/tmp/op 2>/tmp/op &
#Grep Application ID
[hdfs@prodnode1 ~]$ grep 'Submitted application' /tmp/op |rev|cut -d' ' -f1|rev
application_1478509018160_0003
[hdfs@prodnode1 ~]$
#Get status
[hdfs@prodnode1 ~]$ yarn application -status application_1478509018160_0003
16/11/11 13:06:07 INFO impl.TimelineClientImpl: Timeline service address: http://prodnode3.openstacklocal:8188/ws/v1/timeline/
16/11/11 13:06:07 INFO client.RMProxy: Connecting to ResourceManager at prodnode3.openstacklocal/172.26.74.211:8050
Application Report :
Application-Id : application_1478509018160_0003
Application-Name : QuasiMonteCarlo
Application-Type : MAPREDUCE
User : hdfs
Queue : default
Start-Time : 1478869426329
Finish-Time : 1478869463505
Progress : 100%
State : FINISHED
Final-State : SUCCEEDED
Tracking-URL : http://prodnode3.openstacklocal:19888/jobhistory/job/job_1478509018160_0003
RPC Port : 42357
AM Host : prodnode1.openstacklocal
Aggregate Resource Allocation : 129970 MB-seconds, 228 vcore-seconds
Log Aggregation Status : SUCCEEDED
Diagnostics :
[hdfs@prodnode1 ~]$
Hope this information helps! 🙂
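For convenience, the three steps above could be wrapped in a small script. This is only a sketch built from the commands shown above (paths and jar version are the ones from this example and may differ on your cluster); it runs the job in the foreground so the application ID is guaranteed to be in the output before it is grepped:
#!/usr/bin/env bash
set -euo pipefail
OUT=/tmp/op
# Run the example job and capture both stdout and stderr
/usr/hdp/current/hadoop-client/bin/hadoop jar \
  /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.7.1.2.4.2.0-258.jar \
  pi 10 10 > "$OUT" 2>&1
# Extract the application ID from the console output
APP_ID=$(grep 'Submitted application' "$OUT" | rev | cut -d' ' -f1 | rev)
# Query YARN for the final job report
yarn application -status "$APP_ID"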
11-11-2016
12:56 PM
4 Kudos
@Saikiran Parepally Even if the session gets timed out, YARN will retry the attempt up to 3 more times, and the application will only be killed once all 4 attempts have failed. That might be why your YARN application keeps running longer. Please refer to the thread below for more information. https://qnalist.com/questions/4398360/how-to-terminate-a-running-hive-query-executed-with-jdbc-hive-server-2
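If you do not want to wait for the retries to exhaust themselves, you can also kill the leftover application by hand. A minimal sketch (the application ID is a placeholder):
# Find the application that is still running
yarn application -list -appStates RUNNING
# Kill it explicitly
yarn application -kill <application-id>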
11-11-2016
12:19 PM
3 Kudos
@Imtiaz Yousaf Can you please check whether the hive-exec-<version>.jar file exists in your Oozie sharelib? e.g.
[root@prodnode3 ~]# hadoop fs -ls /user/oozie/share/lib/lib_20160926083442/hive/|grep exec
-rw-r--r-- 3 oozie hdfs 20755003 2016-09-26 08:34 /user/oozie/share/lib/lib_20160926083442/hive/hive-exec-1.2.1000.2.4.2.0-258.jar
[root@prodnode3 ~]#
If not, then please try to re-generate the Oozie sharelib using the commands below.
#Command 1
/usr/hdp/<version>/oozie/bin/oozie-setup.sh sharelib create -locallib /usr/hdp/<version>/oozie/oozie-sharelib.tar.gz -fs hdfs://<active-nn>:8020
#Command 2
oozie admin -oozie http://localhost:11000/oozie -sharelibupdate
Please run the above commands on the Oozie server as the 'oozie' user.
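As an optional check after the update, you can ask Oozie which jars it actually picked up for the hive sharelib (run on the Oozie server; the URL is the same local Oozie endpoint used above):
oozie admin -oozie http://localhost:11000/oozie -shareliblist hive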
11-10-2016
03:56 PM
3 Kudos
Below are the steps to move a NameNode from one machine to another via Ambari. These steps have been successfully tested with the following versions:
Ambari - 2.2.2.X
HDP - 2.2.X/2.3.X/2.4.X
Step 1: Ensure that all the HDFS components are up and running.
Step 2: Select the HDFS service --> click Service Actions on the top right --> click Move NameNode.
Step 3: Read the instructions carefully and click Next.
Step 4: Select the new target host for your Standby NameNode --> click Next.
Step 5: Review the changes and click Deploy.
Step 6: The wizard will stop the required services, set up the new NameNode + ZKFC, disable the HDFS components on the earlier NameNode host, and start services on the new NameNode. Once this step is done --> click Next.
Step 7: Follow the manual steps given in the 'Manual Commands' section.
Step 8: Confirm that you have performed all the manual steps and click OK.
Step 9: Once confirmed, Ambari will delete the earlier Standby NameNode and start all the services.
Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
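As an optional follow-up to Step 9, a quick sanity check of the HA state after the move (a sketch; nn1/nn2 are placeholders for the NameNode service IDs defined in dfs.ha.namenodes.<nameservice> in your hdfs-site.xml):
sudo -u hdfs hdfs haadmin -getServiceState nn1
sudo -u hdfs hdfs haadmin -getServiceState nn2
One NameNode should report active and the other standby before you resume normal operations.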
11-08-2016
02:36 PM
1 Kudo
@Anindya Chattopadhyay Can you please try ssh root@sandbox -p 2222
11-04-2016
03:16 PM
3 Kudos
@Daniel Scheiner In HDP, hive.execution.engine is set to tez by default, hence all complex queries are executed via Tez. If you want, you can change this property to 'mr'. Hope this information helps.
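If you only want a particular query to use MapReduce instead of Tez, you can also override the setting per session rather than cluster-wide. A minimal sketch (the table name is a placeholder):
hive -e "set hive.execution.engine=mr; select count(*) from your_table;"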
11-04-2016
03:12 PM
5 Kudos
@Sivasaravanakumar K As mentioned by @Geoffrey Shelton Okot, you can install a minimal version of Hadoop to get Oozie working! Regarding Java: it's not compulsory to write MapReduce code. You can write your code as per your requirements --> keep the commands to run the Java code in a simple shell script --> execute that shell script via Oozie using the shell action. Hope this information helps!
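A minimal sketch of such a wrapper script (the jar name and class name are placeholders for your own code); the Oozie shell action would then point at this script:
#!/usr/bin/env bash
set -euo pipefail
# Run your own Java program; no MapReduce code required
java -cp myapp.jar com.example.MyApp "$@"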
11-01-2016
01:38 PM
4 Kudos
Step 1: Allow the ports listed below in your OS firewall for Ambari.
https://ambari.apache.org/1.2.5/installing-hadoop-using-ambari/content/reference_chap2_7.html
Step 2: Go through the required components and allow the corresponding ports in your OS firewall for HDP.
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_HDP_Reference_Guide/content/accumulo-ports.html
Step 3: In order to allow YARN jobs to run successfully, add a custom TCP port range to the YARN configuration. Log in to the Ambari UI --> select MapReduce2 --> Configs --> Custom mapred-site --> add/modify the property below:
yarn.app.mapreduce.am.job.client.port-range=32000-65000
Notes:
1. 32000-65000 is the port range the MapReduce Application Master will use for its client-facing port, so it must be reachable between cluster nodes.
2. You can widen the range based on job volume.
How to add an exception in the CentOS 7 firewall? Example for Step 3:
#firewall-cmd --permanent --zone=public --add-port=32000-65000/tcp
#firewall-cmd --reload
Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
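To confirm that the rule actually took effect after the reload, you can list the currently open ports (a simple check with the same CentOS 7 firewalld as above):
#firewall-cmd --zone=public --list-ports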
10-31-2016
03:34 PM
3 Kudos
@Volodymyr Ostapiv Can you please refer to the thread below:
https://community.hortonworks.com/questions/33127/i-cant-add-new-services-into-ambari.html
It looks like a permission issue. Can you please try changing the owner of /var/run/ambari-server to the user that the ambari-server daemon runs as? Please let me know if you need any further help.
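A minimal sketch of what that could look like (the 'ambari' user below is a placeholder; use whatever user your ambari-server daemon actually runs as, which is normally recorded in /etc/ambari-server/conf/ambari.properties):
chown -R ambari:ambari /var/run/ambari-server
ambari-server restart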