Member since: 06-20-2016
Posts: 308
Kudos Received: 103
Solutions: 29
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1934 | 09-19-2018 06:31 PM |
| | 1434 | 09-13-2018 09:33 PM |
| | 1393 | 09-04-2018 05:29 PM |
| | 4369 | 08-27-2018 04:33 PM |
| | 3456 | 08-22-2018 07:46 PM |
09-13-2018
10:35 PM
While upgrading or downgrading HDP in Ambari, if you see "Unable to determine the stack and stack version", the repo version metadata may be inconsistent. From Ambari 2.6.x onward, the "resolved" column of the repo_version table should be 1. If it shows 0, run the update query below: update repo_version set resolved = 1 Restart Ambari and then attempt the operation again; it should go through. Note: as of now this is not tracked as a bug, since we have not been able to reproduce the issue.
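As a sketch, the check and the fix above can be run directly against the Ambari database. Column names here follow a typical Ambari 2.6 schema and may vary by version, so verify your schema before running any update:

```sql
-- Check whether any registered repo versions are still unresolved
select repo_version_id, version, resolved from repo_version;

-- If any row shows resolved = 0, mark it resolved
update repo_version set resolved = 1 where resolved = 0;
```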
09-13-2018
09:33 PM
@Jorge Luis Hernandez Olmos If you are using Ambari, you can find that in the task logs in the UI.
09-04-2018
06:24 PM
@Michael Bronson What kind of corruption is that? Is the file incomplete, or smaller than it should be?
09-04-2018
05:29 PM
@Manoj Nirale Ambari has no setting like a 10-hour connection timeout. Do you mean that services should not be impacted when hosts come back? You can use the Auto-Start feature in Ambari, documented at https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.1.5/bk_ambari-operations/content/enable_service_auto_start.html With this, as soon as a host starts, Ambari will try to start its services. Let me know if you have any more questions on this.
08-29-2018
10:40 PM
In Ambari, when trying to upgrade to a new HDP/HDF version, the "Install Packages" button may sometimes appear disabled after adding a new repo version. There can be multiple reasons for this:
1. Check which stack_id the clusters table is pointing to. It should point to the correct current stack_id; if not, correct it with an update query.
2. Check the clusterstate table. It should also point to the correct current stack_id; if not, correct it with an update query.
3. To find the correct current stack_id, run the query "select stack_id, version from repo_version" and note the stack_id for your current HDP/HDF version.
4. The tables in steps 1 and 2 should both point to the stack_id returned in step 3.
There can be other reasons as well, such as a missing "upgrades" folder in the stack folder.
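The checks above can be sketched as SQL against the Ambari database. The table names come from the steps above; the pointer column names (desired_stack_id, current_stack_id) are assumptions based on a typical Ambari 2.6 schema, so inspect your schema before running any update:

```sql
-- Step 3: find the stack_id registered for the current HDP/HDF version
select stack_id, version from repo_version;

-- Steps 1 and 2: check where the cluster tables point
-- (column names may differ between Ambari versions)
select cluster_id, desired_stack_id from clusters;
select cluster_id, current_stack_id from clusterstate;

-- Step 4: if either table differs from the repo_version stack_id,
-- correct it; replace 100 with the stack_id from the first query:
-- update clusters set desired_stack_id = 100;
-- update clusterstate set current_stack_id = 100;
```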
08-28-2018
04:28 PM
@Bhushan Kandalkar Then you can try the recommendations above and see if that helps.
08-28-2018
04:11 PM
@Michael Bronson Can you please select the correct answer and close this thread?
08-27-2018
05:10 PM
@Michael Bronson The config below should work fine. I am not sure DailyRollingFileAppender would work for size-based rolling (it rolls by date), which is why RollingFileAppender is used here. # Audit logging for ResourceManager
rm.audit.logger=${hadoop.root.logger}
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=${rm.audit.logger}
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=false
log4j.appender.RMAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RMAUDIT.File=${yarn.log.dir}/rm-audit.log
log4j.appender.RMAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RMAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RMAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.RMAUDIT.MaxFileSize=1KB
log4j.appender.RMAUDIT.MaxBackupIndex=2
08-27-2018
04:40 PM
@Bhushan Kandalkar Which HDP version are you using? By default, 2-way SSL is enabled between Hive and Ranger, so the server expects a client certificate as part of the handshake, and that is what is failing. I have an article at https://community.hortonworks.com/articles/68150/configuring-ranger-ranger-hdfs-plugin-for-ssl-with.html - please follow it and let me know. Alternatively, you can try setting the configs below: ranger.service.https.attrib.clientAuth=false ranger.service.https.attrib.client.auth=false
08-27-2018
04:33 PM
@Michael Bronson Did you try modifying the configs below in Ambari? yarn_rm_summary_log_max_backup_size
yarn_rm_summary_log_number_of_backup_files