Member since
06-20-2016
308
Posts
103
Kudos Received
29
Solutions
08-22-2018
09:31 PM
With Ambari 2.6.0 or 2.5.x versions, if you try to use the 2.6.3 ( http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0/HDP-2.6.3.0-235.xml ) or 2.6.4 ( http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0/HDP-2.6.4.0-91.xml ) VDF files, Ambari fails to load/parse them, and you will see an error like the following in the logs: "An internal system exception occurred: Could not load url from http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0/HDP-2.6.4.0-91.xml. null" These VDF files contain GPL-tagged repository URLs, and these Ambari versions cannot understand them, hence the error above. Consider upgrading Ambari to 2.6.1 or a later version. If you want to continue using Ambari 2.6.0 or earlier, you may have to create a private repository, download the GPL binaries, and work with a VDF file without GPL tags. You can contact Hortonworks support for any further help.
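Before registering a VDF with an older Ambari, it can help to check whether the file carries GPL-tagged repositories at all. The sketch below uses a simplified, hypothetical VDF fragment for illustration only (the real HDP VDF schema differs in detail); the idea is just to scan `<repo>` entries for a GPL marker:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified VDF fragment for illustration only; the real
# HDP VDF schema is more involved, but newer files mark the GPL repo
# (e.g. HDP-GPL) in a way that Ambari < 2.6.1 cannot parse.
vdf_sample = """<repository-version>
  <repository-info>
    <os family="redhat7">
      <repo><repoid>HDP-2.6</repoid></repo>
      <repo tags="GPL"><repoid>HDP-2.6-GPL</repoid></repo>
    </os>
  </repository-info>
</repository-version>"""

def gpl_repos(vdf_xml):
    """Return the repoids of repositories carrying a GPL tag."""
    root = ET.fromstring(vdf_xml)
    return [repo.findtext("repoid")
            for repo in root.iter("repo")
            if "GPL" in repo.get("tags", "")]

print(gpl_repos(vdf_sample))  # ['HDP-2.6-GPL']
```

If the list is non-empty and your Ambari is older than 2.6.1, expect the parse failure described above.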
08-15-2018
06:25 PM
2 Kudos
The properties below can be used to configure log4j for size-based, compressed log backups. Here is an example for the Hive service:
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFA.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.DRFA.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFA.rollingPolicy.ActiveFileName=${hive.log.dir}/${hive.log.file}.log
log4j.appender.DRFA.rollingPolicy.FileNamePattern=${hive.log.dir}/${hive.log.file}-.%i.log.gz
log4j.appender.DRFA.triggeringPolicy.MaxFileSize=10000
log4j.appender.DRFA.rollingPolicy.maxIndex=10
For some reason MaxFileSize does not work when given with a unit suffix such as "1MB", so specify the size in plain bytes as shown above. Note: if it does not work, please check the console log (.out or .err) for any WARNINGs.
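Since MaxFileSize has to be given as a bare byte count here, a tiny helper like the following (an illustrative sketch, not part of log4j) can convert the usual "10MB"-style values:

```python
def to_bytes(size):
    """Convert a human-readable size like '10MB' to a plain byte
    count, suitable for the MaxFileSize property above."""
    units = {"GB": 1024 ** 3, "MB": 1024 ** 2, "KB": 1024}
    s = size.strip().upper()
    for suffix, mult in units.items():
        if s.endswith(suffix):
            return int(float(s[:-len(suffix)]) * mult)
    return int(s)  # already a bare byte count

print(to_bytes("10MB"))   # 10485760
print(to_bytes("10000"))  # 10000
```

So for a 10 MB trigger you would set MaxFileSize=10485760.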
08-14-2018
07:05 PM
Due to bug AMBARI-24283, you may sometimes see warnings like the following: 2018-08-09 09:38:27,666 WARN - You have config groups present in the database with no corresponding service found, [(ConfigGroup, Service) => ( HDFS_DL111, null ), ( YARN_2222, null )]. Run --auto-fix-database to fix this automatically. Possible root cause: in some Ambari 2.5.x/2.6.x versions, a service DELETE does not delete its config groups cleanly, and this WARNING then appears while starting ambari-server. To fix the warnings, follow the steps below. 1. Take an Ambari database backup. 2. Run the queries below (replacing the group names with the ones from your warning):
delete from confgroupclusterconfigmapping where config_group_id in (select group_id from configgroup where group_name in ( 'HDFS_DL111', 'YARN_2222'));
delete from configgrouphostmapping where config_group_id in (select group_id from configgroup where group_name in ( 'HDFS_DL111', 'YARN_2222'));
delete from configgroup where group_name in ( 'HDFS_DL111', 'YARN_2222');
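Because the same list of group names has to be pasted into three statements, it is easy to make a typo. A small hedged sketch that generates the statements from one list (the table names come from the steps above; the helper itself is illustrative):

```python
def cleanup_sql(group_names):
    """Build the three DELETE statements for a list of stale config
    group names, in the same order as the manual steps above."""
    names = ", ".join("'%s'" % n for n in group_names)
    sub = ("(select group_id from configgroup "
           "where group_name in ( %s ))" % names)
    return [
        "delete from confgroupclusterconfigmapping "
        "where config_group_id in %s;" % sub,
        "delete from configgrouphostmapping "
        "where config_group_id in %s;" % sub,
        "delete from configgroup where group_name in ( %s );" % names,
    ]

for stmt in cleanup_sql(["HDFS_DL111", "YARN_2222"]):
    print(stmt)
```

Review the printed statements before running them, and only after taking the database backup from step 1.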
08-09-2018
08:37 PM
2 Kudos
Pre-requisites: 1. Set up Ambari with LDAP and sync users. 2. Set up Knox and point it to the same LDAP as the Ambari server. Enable SSO for Ambari: 1. Get the Knox public certificate by running: openssl s_client -connect KNOXHOST:8443 <<<'' | openssl x509 -out /tmp/knox.crt 2. Run "ambari-server setup-sso" 3. For "Provider URL", enter https://<hostname>:8443/gateway/knoxsso/api/v1/websso 4. For "Public Certificate pem", provide the certificate content from step 1 without the BEGIN/END blocks: -----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
Note: Make sure your /etc/ambari-server/conf/jwt-cert.pem file has only one BEGIN/END pair. 5. You can select the defaults for the rest of the configs. 6. Restart the Ambari server. Knox configuration: 1. If Ambari and Knox are on different hosts, whitelist the Ambari URL.
In Advanced knoxsso-topology, modify the config below to whitelist everything (or write a regex for specific hosts): <param>
<name>knoxsso.redirect.whitelist.regex</name>
<value>.*</value>
</param>
2. Restart the Knox server. Now try accessing Ambari at http://HOSTNAME_OR_IP:PORT/ 1. It should redirect to the Knox login page. 2. Enter the username/password and submit. 3. It takes you back to the Ambari page, logged in. For any issues, refer to /var/log/knox/gateway.log and /var/log/ambari-server/ambari-server.log to get some clue about the failures.
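Before restarting Knox, it is worth sanity-checking that your knoxsso.redirect.whitelist.regex value will actually match the Ambari URL; a non-matching regex silently breaks the redirect back from Knox. A quick sketch (the stricter pattern and the example hostnames are hypothetical):

```python
import re

def matches_whitelist(regex, url):
    """Check whether a redirect URL would pass the whitelist regex."""
    return re.match(regex, url) is not None

allow_all = r".*"
# Hypothetical stricter pattern; adjust host and port to your cluster.
strict = r"^http://ambari\.example\.com:8080/.*$"

print(matches_whitelist(allow_all, "http://ambari.example.com:8080/"))  # True
print(matches_whitelist(strict, "http://ambari.example.com:8080/"))     # True
print(matches_whitelist(strict, "http://evil.example.com:8080/"))       # False
```

If your Ambari URL does not match the configured regex, fix the regex before restarting Knox.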
08-07-2018
06:11 PM
2 Kudos
This is applicable to CentOS6/RHEL6 - for CentOS7/RHEL7 please follow https://community.hortonworks.com/articles/188269/javapython-updates-and-ambari-agent-tls-settings.html Upgrading to JDK 1.8.0_171 disables some of the TLSv1/TLSv1.1 protocols and algorithms. The only remaining option is TLSv1.2, but CentOS6/RHEL6 uses Python 2.6, which does not support TLSv1.2. Agent-server communication then fails with errors like the following:
WARNING 2018-04-24 16:35:10,989 NetUtil.py:124 - Server at https://***.***.***.***:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-04-24 16:35:20,990 NetUtil.py:70 - Connecting to https://***.***.***.***:8440/ca
ERROR 2018-04-24 16:35:20,991 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
ERROR 2018-04-24 16:35:20,991 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
To work around this problem, you can tweak the /usr/jdk64/jdk1.8.0_112/jre/lib/security/java.security file on the Ambari server host to re-enable some of the algorithms.
From:
jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 1024, \
    EC keySize < 224, DES40_CBC, RC4_40, 3DES_EDE_CBC
To:
jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 1024, \
    EC keySize < 224, DES40_CBC, RC4_40
Please note that this is just a temporary workaround; it is recommended to upgrade the OS version so that TLSv1.2 can be used.
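The edit above only removes the 3DES_EDE_CBC entry from the comma-separated list. Because the property value is long and line-wrapped, editing it by hand is error-prone; a small illustrative helper shows exactly what the before/after values should be:

```python
def drop_algorithm(value, name):
    """Remove one entry from a jdk.tls.disabledAlgorithms value,
    preserving the order of the remaining entries."""
    parts = [p.strip() for p in value.split(",")]
    return ", ".join(p for p in parts if p != name)

before = ("SSLv3, RC4, MD5withRSA, DH keySize < 1024, "
          "EC keySize < 224, DES40_CBC, RC4_40, 3DES_EDE_CBC")
after = drop_algorithm(before, "3DES_EDE_CBC")
print(after)
# SSLv3, RC4, MD5withRSA, DH keySize < 1024, EC keySize < 224, DES40_CBC, RC4_40
```

Note that entries like "DH keySize < 1024" contain spaces but no commas, so splitting on commas keeps them intact.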
06-16-2017
07:04 AM
@Anitha R
Sorry for the late reply - I am not available currently. These steps are only for CA-signed certificates on the server side; client certificates are generated dynamically.
05-12-2017
09:11 PM
@Syed Jawad Gilani It is difficult to tell the reason from the error message above, but are you generating the keystore files with a newer Java version and trying to use them with an older one? Please check that.
05-11-2017
11:25 PM
@Syed Jawad Gilani Do you still have this issue? Sorry - I did not check your questions earlier.
04-27-2017
06:07 PM
2 Kudos
If the cluster is large and more than a year old, Ambari slows down a bit, so it is recommended to purge historical operational data. Note: you will lose the upgrade history as well; Ambari does not use the upgrade history in any way. Note (updated in August 2017): from Ambari 2.5.x there is a product utility to do the cleanup - "db-cleanup". Below is a list of queries to delete operational data older than one month. Please do this with caution, and take a database backup before attempting it.
CREATE TEMPORARY TABLE IF NOT EXISTS tmp_request_id AS
SELECT MAX(request_id) AS request_id FROM request WHERE create_time <= (SELECT (UNIX_TIMESTAMP(NOW()) - 2678400) * 1000 as epoch_1_month_ago_times_1000);
CREATE TEMPORARY TABLE IF NOT EXISTS tmp_task_id AS
SELECT MAX(task_id) AS task_id FROM host_role_command WHERE request_id <= (SELECT request_id FROM tmp_request_id);
CREATE TEMPORARY TABLE IF NOT EXISTS tmp_upgrade_ids AS
SELECT upgrade_id FROM upgrade WHERE request_id <= (SELECT request_id FROM tmp_request_id);
DELETE FROM execution_command WHERE task_id <= (SELECT task_id FROM tmp_task_id);
DELETE FROM host_role_command WHERE task_id <= (SELECT task_id FROM tmp_task_id);
DELETE FROM role_success_criteria WHERE request_id <= (SELECT request_id FROM tmp_request_id);
DELETE FROM stage WHERE request_id <= (SELECT request_id FROM tmp_request_id);
DELETE FROM topology_logical_task;
DELETE FROM requestresourcefilter WHERE request_id <= (SELECT request_id FROM tmp_request_id);
DELETE FROM requestoperationlevel WHERE request_id <= (SELECT request_id FROM tmp_request_id);
DELETE FROM upgrade_item WHERE upgrade_group_id IN (SELECT upgrade_group_id FROM upgrade_group WHERE upgrade_id IN (SELECT upgrade_id FROM tmp_upgrade_ids));
DELETE FROM upgrade_group WHERE upgrade_id IN (SELECT upgrade_id FROM tmp_upgrade_ids);
DELETE FROM upgrade WHERE request_id <= (SELECT request_id FROM tmp_request_id);
DELETE FROM request WHERE request_id <= (SELECT request_id FROM tmp_request_id);
DELETE FROM topology_host_task;
DELETE FROM topology_host_request;
DELETE FROM topology_logical_request;
DELETE FROM topology_host_info;
DELETE FROM topology_hostgroup;
DELETE FROM topology_request;
DROP TABLE tmp_upgrade_ids;
DROP TABLE tmp_task_id;
DROP TABLE tmp_request_id;
Note: These queries work on a MySQL database.
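The cutoff in the first query, (UNIX_TIMESTAMP(NOW()) - 2678400) * 1000, is 31 days (2678400 seconds) ago expressed in epoch milliseconds, which is how Ambari stores request.create_time. A small sketch to reproduce and verify the arithmetic:

```python
def cutoff_epoch_ms(now_epoch_s, seconds=2678400):
    """Reproduce the cutoff used by the request query above:
    (UNIX_TIMESTAMP(NOW()) - 2678400) * 1000, i.e. 31 days ago
    (2678400 s = 31 * 86400 s) in epoch milliseconds."""
    return (now_epoch_s - seconds) * 1000

# With a fixed "now" of 1,500,000,000 (mid-July 2017 UTC):
print(cutoff_epoch_ms(1500000000))  # 1497321600000
```

To purge a different window, adjust the seconds argument (e.g. 90 * 86400 for three months) and the corresponding constant in the SQL.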
04-20-2017
12:59 PM
1 Kudo
Sometimes, even though the Move Master wizard has completed, the UI still shows the message "Move Master Wizard In Progress". To fix this problem you can follow the steps below. Step 1) curl -u admin:admin -i -H 'X-Requested-By: ambari' -X POST -d '{"wizard-data":"{\"userName\":\"admin\",\"controllerName\":\"reassignMasterController\"}"}' http://ambari.example.com:8080/api/v1/persist
Step 2) Log in to the Ambari UI as the "admin" user. Step 3) Create a new "local" user with some name like "admin1" (and any password, e.g. "admin1"). Step 4) Open another browser and log in as "admin1". This user might see the "Add Service Wizard in Progress" link blinking at the top. Step 5) Now, from the earlier browser where we logged in as "admin", click "Actions" => "Add Service". Step 6) Now, even if the "admin" user does nothing or shuts down his/her laptop, the other admin users (like "admin1") will not be able to perform any action in the Ambari UI and will keep seeing the "Add Service Wizard in Progress" link blinking at the top. Note: This was tried and tested on Ambari 2.4.2.
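The -d argument in step 1 is JSON-in-JSON: the wizard-data value is itself a JSON string, so the inner quotes must be escaped. Building the payload programmatically avoids getting the escaping wrong; this is just an illustrative sketch of how the payload is shaped:

```python
import json

# The wizard-data value is itself a JSON string, so the payload is
# JSON-in-JSON; json.dumps handles the inner-quote escaping for us.
inner = json.dumps({"userName": "admin",
                    "controllerName": "reassignMasterController"})
payload = json.dumps({"wizard-data": inner})
print(payload)

# Round-trip check: the inner string decodes back to the original dict.
decoded = json.loads(json.loads(payload)["wizard-data"])
print(decoded["controllerName"])  # reassignMasterController
```

The printed payload is what goes after -d in the curl command against /api/v1/persist.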