Member since: 04-16-2019
Posts: 373
Kudos Received: 7
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 23937 | 10-16-2018 11:27 AM
 | 7988 | 09-29-2018 06:59 AM
 | 1225 | 07-17-2018 08:44 AM
 | 6800 | 04-18-2018 08:59 AM
10-03-2018
07:05 AM
@Anurag Mishra If the response answered your question, could you take the time to log in and "Accept" the answer to close the thread, so other members can use it as a solution?
02-19-2019
12:37 PM
I got an error while running the YARN MapReduce example job. How can I set the "token-renewal.exclude" parameter for YARN jobs?
09-17-2018
06:26 AM
Hi @Anurag Mishra, it seems Tez is unable to launch the session. First, kill all running applications (see the sketch below) and retry launching the job. If that doesn't work, tune the Tez configuration settings using the article below: https://community.hortonworks.com/articles/14309/demystify-tez-tuning-step-by-step.html
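A minimal sketch of the "kill the running applications" step from the YARN CLI (the application ID below is a placeholder):
# yarn application -list -appStates RUNNING
# yarn application -kill <application_id>
The first command lists the applications currently holding containers; the second frees their resources so the Tez session can obtain an AM container.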
09-03-2018
08:54 AM
1 Kudo
It started working; I had forgotten to start ambari-agent.
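For reference, a quick check-and-start sketch for the agent on the affected node (assuming a standard Ambari install):
# ambari-agent status
# ambari-agent start
The status command confirms whether the agent process is running; start brings it up so the host re-registers with the Ambari server.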
07-30-2018
02:20 PM
You can use http://<rm http address:port>/ws/v1/cluster/metrics . This will retrieve the allocatedMB and allocatedVirtualCores at a given point in time. Refer to http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Metrics_API
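A hedged example of pulling those two fields with curl (replace the address with your ResourceManager's HTTP endpoint; the python one-liner is just one way of filtering the JSON):
# curl -s http://<rm http address:port>/ws/v1/cluster/metrics
# curl -s http://<rm http address:port>/ws/v1/cluster/metrics | python -c 'import sys,json; m=json.load(sys.stdin)["clusterMetrics"]; print(m["allocatedMB"], m["allocatedVirtualCores"])'
Both allocatedMB and allocatedVirtualCores are fields of the clusterMetrics object returned by that endpoint.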
07-24-2018
08:23 AM
Hi @Anurag Mishra Prior to HDP 3 you could only see that an application was killed by a user, not who killed it. HDP 3 and onwards is more informative about who killed an application. For example, on an HDP 2.6 cluster:
[jsneep@node4 ~]$ yarn jar /usr/hdp/2.6.4.0-91/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 9000
18/07/24 07:44:44 INFO security.TokenCache: Got dt for hdfs://hwc1251-node2.hogwarts-labs.com:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.25.33.145:8020, Ident: (HDFS_DELEGATION_TOKEN token 7 for jsneep)
18/07/24 07:44:45 INFO input.FileInputFormat: Total input paths to process : 10
18/07/24 07:44:45 INFO mapreduce.JobSubmitter: number of splits:10
18/07/24 07:44:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1532417644227_0005
18/07/24 07:44:46 INFO impl.YarnClientImpl: Submitted application application_1532417644227_0005
18/07/24 07:44:46 INFO mapreduce.Job: Running job: job_1532417644227_0005
[root@hwc1251-node4 ~]# yarn application -kill application_1532417644227_0005
18/07/24 07:44:53 INFO mapreduce.Job: Job job_1532417644227_0005 failed with state KILLED due to: Application killed by user.
18/07/24 07:44:53 INFO mapreduce.Job: Counters: 0
Job Finished in 8.516 seconds
For example, above I've submitted a YARN job (application_1532417644227_0005) and killed it. The logs only state "Application killed by user." I can also browse the Resource Manager UI at http://<RM IP ADDRESS>:8088/cluster/apps/KILLED and see that it was killed by a user, but not by whom. The Apache JIRA for this is https://issues.apache.org/jira/browse/YARN-5053 | "More informative diagnostics when applications killed by a user". In my HDP 3 cluster, when I submit an identical job and kill it:
[root@c2175-node4 ~]# yarn app -kill application_1532419910561_0001
18/07/24 08:12:45 INFO client.RMProxy: Connecting to ResourceManager at c2175-node2.hwx.com/172.25.39.144:8050
18/07/24 08:12:45 INFO client.AHSProxy: Connecting to Application History server at c2175-node2.hwx.com/172.25.39.144:10200
Killing application application_1532419910561_0001
Now, via the RM UI, I can browse to http://<RM IP ADDRESS>:8088/ui2/#/yarn-app/application_x_/info and under Diagnostics we will see the user and the source address of the kill operation. The same is visible through the CLI via "yarn app -status application_1532419910561_0001 | grep killed":
Application application_1532419910561_0001 was killed by user root at 172.25.33.15
Edit: PS, you could make use of YARN queues & ACLs to limit/determine who has the right to kill YARN applications. I wanted to mention this in case you're currently unable to get your cluster upgraded to HDP 3. Further info: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_yarn-resource-management/content/controlling_access_to_queues_with_acls.html *please mark this answer as accepted if you found this helpful*
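A small sketch of that ACL route, assuming the Capacity Scheduler and a queue named "default": set yarn.scheduler.capacity.root.default.acl_administer_queue in capacity-scheduler.xml to the users/groups allowed to administer (and therefore kill applications in) that queue, then verify what a given user may do:
# mapred queue -showacls
# yarn queue -status default
The -showacls output lists, per queue, whether the current user holds SUBMIT_APPLICATIONS and ADMINISTER_QUEUE.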
07-23-2018
06:21 PM
See this link: https://community.hortonworks.com/articles/4103/hiveserver2-jdbc-connection-url-examples.html
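For convenience, two common URL shapes from that pattern, sketched with placeholder hostnames (adjust port, database, and principal to your cluster):
# beeline -u "jdbc:hive2://<hs2-host>:10000/default" -n <username>
# beeline -u "jdbc:hive2://<hs2-host>:10000/default;principal=hive/_HOST@<REALM>"
The first form targets an unsecured HiveServer2 on the default binary port; the second appends the HiveServer2 Kerberos principal for a kerberized cluster.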
07-17-2018
08:44 AM
I have solved the issue; the resource path was actually not correct. If the path is incorrect, the admin can still see the policy but the delegate admin is not able to.
07-15-2018
03:13 PM
2 Kudos
@Anurag Mishra du -sh /hadoop/hdfs/data shows the space used, not the space available. You should check the space available on the directory's filesystem using df -h:
# df -h /hadoop/hdfs/data
To see the space available to HDFS itself, use the hdfs command:
# hdfs dfs -df -h /
To add more space with one datanode, either grow the underlying filesystem where /hadoop/hdfs/data is mounted, or create an additional filesystem such as /hadoop/hdfs/data1 and configure the datanode directory (dfs.datanode.data.dir) with the two paths in comma-separated format. You can also add HDFS space by adding another datanode to the cluster.
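As a follow-up sketch for the two-directory option (paths are the ones from the answer above):
# grep -A1 dfs.datanode.data.dir /etc/hadoop/conf/hdfs-site.xml
Then change the value to /hadoop/hdfs/data,/hadoop/hdfs/data1 (comma separated, no spaces) through Ambari and restart the DataNode so block replicas can be written to either location.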
07-15-2018
12:22 AM
@Anurag Mishra I guess the following command was incorrect: you are storing the private key instead of the public key file.
# cd .ssh
# cat id_rsa >> authorized_keys
You should be appending the "id_rsa.pub" file instead, like the following:
# cd .ssh
# cat id_rsa.pub >> authorized_keys
(OR)
# ssh-copy-id -i ~/.ssh/id_rsa.pub $HOST
Additionally, can you please verify that the "hostname" you are passing to the SSH command is correct? Run the following commands to verify the hostname:
# hostname
# hostname -f
Then, after verifying the hostname, run the command below again:
# ssh-copy-id -i ~/.ssh/id_rsa.pub $HOST
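A short follow-up sketch to verify the fix (sshd refuses keys when these permissions are too open):
# chmod 700 ~/.ssh
# chmod 600 ~/.ssh/authorized_keys
# ssh -o PreferredAuthentications=publickey $HOST hostname -f
The last command should print the remote FQDN without a password prompt; if it still prompts, check /var/log/secure on the target host for the reason.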