Member since: 10-24-2015
Posts: 207
Kudos Received: 18
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4379 | 03-04-2018 08:18 PM
 | 4299 | 09-19-2017 04:01 PM
 | 1795 | 01-28-2017 10:31 PM
 | 970 | 12-08-2016 03:04 PM
07-29-2021
12:21 AM
I just ran into this. You should change this parameter in hdfs-site.xml:

<property>
  <name>dfs.block.invalidate.limit</name>
  <value>50000</value>
</property>

The default value is 1000, which is too slow. You may also need to raise the maximum report size if you hit an exception about it:

<property>
  <name>ipc.maximum.data.length</name>
  <value>1073741824</value>
</property>
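If it helps, a minimal way to confirm what values the configuration resolves to after the change (assuming a standard install where the hdfs CLI is on the PATH; the services still need a restart to pick the new values up):

# Print the resolved values from the local client configuration
hdfs getconf -confKey dfs.block.invalidate.limit
hdfs getconf -confKey ipc.maximum.data.length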
08-10-2020
06:12 AM
Hi all, is there a specific method to follow for installing Ambari on Python 3? Has anyone installed it on a Python 3 base?
05-28-2019
11:27 PM
The above was originally posted in the Community Help Track. On Tue May 28 23:19 UTC 2019, a member of the HCC moderation staff moved it to the Security track. The Community Help Track is intended for questions about using the HCC site itself.
12-31-2018
03:13 AM
@Jay Kumar SenSharma It works, great! There was a typo in your command; this works for me:
http://$AMS_COLLECTOR_HOSTNAME:6188/ws/v1/timeline/metrics?metricNames=yarn.QueueMetrics.Queue=root.default.AvailableMB._max&appId=resourcemanager&startTime=1545613074&endTime=1546217874
Thanks again.
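For anyone else polling the same endpoint, a hedged sketch of the call with curl (the collector hostname variable, port 6188, and the epoch-second startTime/endTime values are taken from the URL above; quoting the URL keeps the shell from interpreting the & characters):

curl -s "http://$AMS_COLLECTOR_HOSTNAME:6188/ws/v1/timeline/metrics?metricNames=yarn.QueueMetrics.Queue=root.default.AvailableMB._max&appId=resourcemanager&startTime=1545613074&endTime=1546217874"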
11-12-2018
10:53 PM
If you would just like to check whether your HiveServer2 port is accepting connections, you can run checks like the ones below.

nc -zv <hiveserver2 hostname> 10000
Connection to localhost 10000 port [tcp/ndmp] succeeded!

To count Hive Metastore connections:

netstat -a | grep 9083 | wc -l
314

Note that the above does not guarantee that your Hive service is actually running any tasks. The best way to figure that out is to enable Hive metrics, which you can poll every few minutes as per your requirement. If you have Grafana set up, it should help as well. PS: I'm not sure which distribution you are using, but if you are on HDP, Ambari already has a Run Service Check option, and you can check whether you can invoke it via the REST API.
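The port checks above only prove the listener is up. A hedged way to confirm HiveServer2 actually accepts a session and runs a query is a quick Beeline probe (the hostname, port 10000, and an unsecured JDBC URL are assumptions; adjust for Kerberos or LDAP as needed):

# Opens a JDBC session and runs a trivial query; a connection problem shows up as an error
beeline -u "jdbc:hive2://<hiveserver2 hostname>:10000/default" -e "SELECT 1;"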
10-17-2018
12:11 PM
@PJ Since you have Ranger enabled, it's possible that the permission is being denied on the Ranger side. I would definitely check the Ranger audit logs for any events for that user and see whether you are hitting a permission denial there. Once I had confirmed that Ranger was blocking the access, I would add a Ranger HDFS policy allowing user user1 write access to /user/user1/sparkeventlogs.
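A quick, hedged way to reproduce the denial from the command line (assumes shell access to a cluster node and that sudo -u user1 is permitted; the path comes from the question above):

# Attempt a write as user1; a Ranger/HDFS denial will surface as "Permission denied"
sudo -u user1 hdfs dfs -touchz /user/user1/sparkeventlogs/_write_test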
04-01-2018
04:05 PM
@Aishwarya Sudhakar You need to understand the HDFS directory structure; that is what is causing the issue for you. Here is some explanation. Let's say the username for these example commands is ash. When ash creates a directory in HDFS with the following command:

hadoop fs -mkdir demo
//This creates a directory inside the user's HDFS home directory
//The complete directory path will be /user/ash/demo

it is different from the command given below:

hadoop fs -mkdir /demo
//This creates a directory under the root directory.
//The complete directory path will be /demo

So a suggestion here is: whenever you access directories, use absolute paths to avoid the confusion. In this case, when you create a directory using

hadoop fs -mkdir demo

and load the file into HDFS using

hadoop fs -copyFromLocal dataset.csv demo

your file exists at /user/ash/demo/dataset.csv
//Not at /demo

So the reference to this file in your Spark code should be:

sc.textFile("hdfs:///user/ash/demo/dataset.csv")

Hope this helps!
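To double-check where a relative path actually landed, a small hedged example (assumes the same user ash and the default /user/<username> home directory):

hadoop fs -ls demo              # relative: lists /user/ash/demo
hadoop fs -ls /user/ash/demo    # absolute: same listing
hadoop fs -ls /demo             # only exists if a directory was created at the root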
04-04-2018
04:11 PM
@dthakkar @Sindhu Yes, I did already pass -Dmapreduce.job.queuename=<queue_name>, but two applications run if you look at the YARN jobs list: the first uses the queue set by the above property, and the second uses the default queue. I have no idea why it launches two separate jobs. I resolved this by configuring queue mappings and increasing the AM resource percent. Thanks.
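For reference, the two settings I mean are Capacity Scheduler properties along these lines (a hedged sketch; the user name myuser, the queue name etl, and the 0.5 value are placeholders, and the mapping syntax is u:<user>:<queue>):

<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <value>u:myuser:etl</value>
</property>
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>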
03-07-2018
05:09 AM
@Aymen Rahal The issue is 'Connection refused' on the default SSH port. Verify the following:
1. Check the SSH port in /etc/ssh/sshd_config; if it is not set, try setting it to 22.
2. Try running ssh to the host from a terminal.
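A quick, hedged set of checks (the agent hostname and port 22 are placeholders; adjust if sshd listens on a non-default port):

grep -Ei '^#?Port' /etc/ssh/sshd_config   # see which port sshd is configured to use
nc -zv <agent hostname> 22                # confirm the port is reachable
ssh -p 22 root@<agent hostname>           # try an interactive login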