Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 11341 | 03-08-2019 06:33 PM
 | 4838 | 02-15-2019 08:47 PM
 | 4141 | 09-26-2018 06:02 PM
 | 10512 | 09-07-2018 10:33 PM
 | 5567 | 04-25-2018 01:55 AM
02-16-2016
06:39 AM
1 Kudo
@hyadav
02-16-2016
06:38 AM
1 Kudo
@Karthik Gopal
02-15-2016
12:51 PM
11 Kudos
Step-by-step guide: Shell action in an Oozie workflow via Hue

Step 1: Create a sample shell script and upload it to HDFS.

[root@sandbox shell]# cat ~/sample.sh
#!/bin/bash
echo "`date` hi" > /tmp/output
[root@sandbox shell]# hadoop fs -put sample.sh /user/hue/oozie/workspaces/
[root@sandbox shell]# hadoop fs -ls /user/hue/oozie/workspaces/
-rw-r--r-- 3 root hdfs 44 2016-02-15 10:26 /user/hue/oozie/workspaces/sample.sh

Step 2: Log in to the Hue web UI and select the Oozie editor/dashboard.

Step 3: Go to the "Workflows" tab and click the "Create" button.

Step 4: Fill in the required details and click the "Save" button.

Step 5: Drag a shell action between the start and end nodes.

Step 6: Fill in the required details for the shell action and click the "Done" button.

Step 7: Submit your workflow. You will see that your job is in progress.

[Screenshots: Output 1, Output 2]
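For reference, behind the editor Hue simply generates an Oozie workflow.xml in the workspace directory. Below is a minimal sketch of what that definition could look like for this shell action; the workflow name, node names, schema versions, and file layout are illustrative assumptions, not Hue's verbatim output.

[root@sandbox shell]# hadoop fs -cat /user/hue/oozie/workspaces/workflow.xml
<workflow-app name="shell-wf" xmlns="uri:oozie:workflow:0.4">
  <start to="shell-node"/>
  <action name="shell-node">
    <shell xmlns="uri:oozie:shell-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <exec>sample.sh</exec>
      <!-- Ship the script from HDFS into the container's working directory -->
      <file>/user/hue/oozie/workspaces/sample.sh#sample.sh</file>
    </shell>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Shell action failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>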
02-15-2016
09:35 AM
3 Kudos
@Ian Roberts I believe we can do something like this: for example, if you are running spark-shell, you can add the below configuration to core-site.xml and then run your job with --proxy-user <username>.

<property>
  <name>hadoop.proxyuser.<username>.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.<username>.groups</name>
  <value>*</value>
</property>
Command to run spark-shell on YARN with a proxy user:
spark-shell --master yarn-client --proxy-user <username>
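One caveat worth adding, as an assumption to verify in your environment: the NameNode and ResourceManager read the proxyuser settings at startup, so after editing core-site.xml you would either restart those daemons or reload the configuration with the stock Hadoop refresh commands:

# Reload proxyuser settings on the NameNode without a full restart
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
# Reload proxyuser settings on the ResourceManager
yarn rmadmin -refreshSuperUserGroupsConfiguration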
02-12-2016
09:02 AM
4 Kudos
@Peter Coates I know this is not related to the original question, but once you have installed Hadoop, the articles below can help you tune your cluster:
Tune Hadoop Cluster to get Maximum Performance (Part 1)
Tune Hadoop Cluster to get Maximum Performance (Part 2)
02-11-2016
03:05 AM
6 Kudos
When a user submits a job to YARN via Spark/Samza, the job gets executed as the "yarn" user. How can we make sure that the job runs as the same user who submitted it? Please advise.
Labels:
- Apache Spark
- Apache YARN
02-11-2016
01:58 AM
1 Kudo
Thank you @Neeraj Sabharwal
02-04-2016
03:00 AM
6 Kudos
@Neeraj Sabharwal - Thanks! I have installed an HDP cluster using Ambari Blueprints, and I could see that Ambari has only installed and started the services. For smoke tests, I have added some useful material here.
02-03-2016
04:42 AM
4 Kudos
Thank you so much @vsharma, here are the test results.

Smoke test:

[root@sandbox ~]# curl -u admin:admin -i -H 'X-Requested-By: ambari' -X POST -d '{"RequestInfo": {"context" :"YARN Service Check","command":"YARN_SERVICE_CHECK"},"Requests/resource_filters":[{"service_name":"YARN"}]}' http://127.0.0.1:8080/api/v1/clusters/Sandbox/requests
HTTP/1.1 202 Accepted
User: admin
Set-Cookie: AMBARISESSIONID=1vg24zix87lkmi53hpjl4krvk;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 137
Server: Jetty(8.1.17.v20150415)
{
  "href" : "http://127.0.0.1:8080/api/v1/clusters/Sandbox/requests/87",
  "Requests" : {
    "id" : 87,
    "status" : "Accepted"
  }
}
[root@sandbox ~]#

Track the status of the above smoke test:

curl -u admin:admin -i -H 'X-Requested-By: ambari' -X GET http://127.0.0.1:8080/api/v1/clusters/Sandbox/requests/87

Note: the request ID can be found from the http://127.0.0.1:8080/api/v1/clusters/Sandbox/requests URL; pick up the last request ID to check the status.
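If you would rather not eyeball the JSON for the latest request ID, a small sketch like the one below may help; this one-liner is a hypothetical convenience, assuming python is available on the host, and is not part of the Ambari CLI itself.

# List all requests and print the highest (most recent) request ID
curl -s -u admin:admin -H 'X-Requested-By: ambari' http://127.0.0.1:8080/api/v1/clusters/Sandbox/requests | python -c 'import sys, json; print(max(i["Requests"]["id"] for i in json.load(sys.stdin)["items"]))'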
02-03-2016
02:04 AM
4 Kudos
Based on all the discussion, this is expected behavior: even after granting full permissions via Ranger, only the superuser can modify ownership.