Member since: 03-16-2016
Posts: 707
Kudos Received: 1753
Solutions: 203
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6962 | 09-21-2018 09:54 PM |
| | 8721 | 03-31-2018 03:59 AM |
| | 2613 | 03-31-2018 03:55 AM |
| | 2754 | 03-31-2018 03:31 AM |
| | 6174 | 03-27-2018 03:46 PM |
02-21-2017 09:18 PM
3 Kudos
@Guillaume Roger Your steps are correct. Please be advised that the Compressed field in your DESCRIBE FORMATTED output is not a reliable indicator of whether the table contains compressed data. It typically shows No, because compression settings apply only during the session that loads the data and are not stored persistently with the table metadata. The compression shown by DESCRIBE FORMATTED may also refer to input or intermediate compression rather than output compression. Look at the actual files as they are stored in HDFS for the Hive table in question. *** If this cleared the dilemma, please vote and accept it as the best answer.
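A quick way to check is to list the files under the table's HDFS location; a minimal sketch (the warehouse path below is hypothetical; take the real one from the Location field of DESCRIBE FORMATTED):
hdfs dfs -ls /apps/hive/warehouse/mydb.db/mytable    # compressed text output typically ends in .gz, .snappy, or .deflate
hdfs dfs -cat /apps/hive/warehouse/mydb.db/mytable/000000_0 | head -c 100    # readable plain text means the data is not compressed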
02-15-2017 10:04 PM
2 Kudos
@Vedant Biyani Unfortunately, there is nothing in Ambari that monitors disk failures the way you described. Usually this is done with separate enterprise monitoring software, e.g. OpenView, BMC, etc. As you already mentioned, the failure tolerance for disks is configurable via dfs.datanode.failed.volumes.tolerated, but exceeding that tolerance marks the whole node as failed, which wastes space and forces a rebalance of its data. It is better to know as soon as a single drive fails. If you can't use one of the specialized monitoring tools, one workaround is to tune the "DataNode Health Summary" alert threshold so that it alerts you on the first unhealthy DataNode. ===== If this response is helpful, please vote and accept.
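For reference, a minimal hdfs-site.xml sketch of that property (the value is illustrative; the default is 0, i.e. a single failed volume shuts the DataNode down):
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <!-- number of volumes allowed to fail before the DataNode shuts itself down -->
  <value>1</value>
</property>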
02-15-2017 09:25 PM
2 Kudos
@P D It is probably related to https://issues.apache.org/jira/browse/AMBARI-14384. A good place to monitor community issues with Grafana: https://github.com/grafana/grafana/issues. The only Grafana community tickets close to yours are: https://github.com/GridProtectionAlliance/osisoftpi-grafana/issues/2 https://github.com/grafana/grafana/issues/7550 The latter was closed prematurely. If you could document your issue a little more, you could also post the question on the Grafana community.
01-25-2017 01:40 AM
2 Kudos
@Nasheb Ismaily Yes, that is expected for the FIFO policy. With FIFO, jobs are executed in the order you submitted them. You have the option to use the FAIR policy instead. In that case, all jobs share the available resources fairly and do not have to wait for one another. They will still start in submission order, but depending on what they do, they may finish in a different order. That assumes your cluster has enough resources and that this is the behavior you want by design. I did not include references to various documents because they were already provided and are widely available.
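If this concerns Spark's intra-application job scheduling (an assumption on my part), switching policies is a single configuration property; spark.scheduler.mode defaults to FIFO. A sketch with a placeholder application jar:
spark-submit --conf spark.scheduler.mode=FAIR your_app.jar    # your_app.jar is hypothetical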
01-07-2017 04:58 AM
@Ralph Adekoya Run exec bash and then try your hadoop dfsadmin commands again.
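A minimal sketch, assuming the commands fail because the current shell session has a stale environment:
exec bash                  # replace the current shell and re-read the profile scripts
hadoop dfsadmin -report    # the command should now resolve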
12-30-2016 03:39 AM
2 Kudos
@Prakash Punj I just tested it, and it is easy to reproduce. Stop YARN. Put the NameNode in Safe Mode and take a checkpoint. Attempt to start YARN: it won't start, and the error is exactly the one described in the article. Take the NameNode out of Safe Mode and start YARN. All is good now! Please vote/accept.
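The same steps from the command line, as a sketch (run as the hdfs user):
hdfs dfsadmin -safemode enter    # put the NameNode in Safe Mode
hdfs dfsadmin -saveNamespace     # take a checkpoint
hdfs dfsadmin -safemode leave    # leave Safe Mode so YARN can start again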
12-30-2016 03:35 AM
3 Kudos
@Prakash Punj You need to take the NameNode out of Safe Mode. This is a common error encountered when the NameNode is put into Safe Mode to take a checkpoint. The documentation instructions should probably be updated with a warning.
sudo su hdfs -l -c 'hdfs dfsadmin -safemode leave'
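To confirm the state afterwards, a quick check:
sudo su hdfs -l -c 'hdfs dfsadmin -safemode get'    # should report: Safe mode is OFF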
12-29-2016 06:15 AM
1 Kudo
This is an issue with Ambari versions prior to 2.2.0; the article should have clarified that. The JIRA specifies that it is fixed in 2.2.0. However, search engines will not surface this link in searches, so the article's exposure is extremely limited.
12-29-2016 12:27 AM
3 Kudos
@Ken Jiiii You could create another file and call it from h_script.hql, or you can just add lines like the following to h_script.hql. These set statements override the global settings for that specific job session. Example of such a line: set mapred.reduce.tasks=32;
Almost any setting from Hive's global configuration XML can be overridden at the session level. Look at this post: https://community.hortonworks.com/articles/22419/hive-on-tez-performance-tuning-determining-reducer.html It shows many of these set statements, which you can include in a set_h_script.hql invoked from h_script.hql, or in a block added at the beginning of h_script.hql. Personally, I prefer a separate file invoked from my job script.
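A minimal sketch of the separate-file approach (file names and the second property are illustrative; the source command works in the Hive CLI):
-- set_h_script.hql: session-level overrides
set mapred.reduce.tasks=32;
set hive.exec.parallel=true;
-- then at the top of h_script.hql:
source /path/to/set_h_script.hql;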