Member since: 10-01-2015
Posts: 65
Kudos Received: 42
Solutions: 11
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3494 | 01-31-2018 06:29 PM
 | 1525 | 09-27-2017 06:32 PM
 | 2418 | 03-01-2017 06:26 PM
 | 1076 | 01-09-2017 06:42 PM
 | 21534 | 07-09-2016 06:28 PM
01-31-2018
06:29 PM
Can you upgrade Ambari to 2.5.2, or upgrade AMS alone to 2.5.2? This is most likely a known issue: https://issues.apache.org/jira/browse/AMBARI-21640
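If you choose to upgrade only AMS, a minimal sketch of the package-level upgrade, assuming the standard ambari-metrics package names and a yum repo already pointing at the 2.5.2 release (verify against the official upgrade guide for your stack):

# Sketch only: upgrade the AMS packages in place, assuming standard
# ambari-metrics package names and a repo configured for 2.5.2.
# Stop the Metrics Collector from Ambari before running this.
yum clean all
yum upgrade -y ambari-metrics-collector        # on the collector host
yum upgrade -y ambari-metrics-monitor          # on every monitored host
# Restart the Ambari Metrics service from the Ambari UI when done.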
09-27-2017
06:32 PM
You should be able to find DEBUG-level messages in the individual Hadoop service logs; look for messages starting with org.apache.hadoop.metrics2.*. One config that is missing is:

# default sampling period
*.period=10
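To confirm the sink is actually emitting after adding the config, a quick check; the log path below is an assumption for a typical layout, so adjust it to your install:

# Assumption: default HDP log layout; adjust the path for your installation.
grep "org.apache.hadoop.metrics2" /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | tail -20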
05-01-2017
11:04 PM
9 Kudos
Problem summary: Users can end up in a situation where the SQL engine for all Ambari tables is set to MyISAM instead of InnoDB, which is required by Ambari. There are a few ways to end up in this situation: prior to MySQL 5.5.5 the default engine was MyISAM, and MyISAM could also have been configured as a global default in my.cnf, which affected Ambari versions before 2.5. AMBARI-18951 addressed this by making InnoDB the default for new databases (Ambari 2.5+ deployments) and by throwing an explicit error that prevents an Ambari upgrade when the wrong engine is set, so that you cannot land in an intermediate, partially upgraded state.

The following is a sample error encountered on an upgrade from 2.2.2 to 2.4.2:

ERROR [main] DBAccessorImpl:830 - Error executing query: ALTER TABLE hostcomponentdesiredstate ADD CONSTRAINT hstcmpnntdesiredstatecmpnntnme FOREIGN KEY (component_name, service_name, cluster_id) REFERENCES servicecomponentdesiredstate (component_name, service_name, cluster_id)
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Duplicate key name 'hstcmpnntdesiredstatecmpnntnme'

Reason for failure: When MyISAM is the storage engine, a FOREIGN KEY create statement does not create any object in the database describing the foreign key; it only creates the corresponding index. This is why the upgrade fails: the Ambari UpgradeCatalog checks for the existence of a foreign key that is absent, while an index with the same name already exists. Migration is also a problem: because no foreign key information is stored in the database, a simple engine change will not fix this.

Suggested approach: The MySQL docs suggest migrating from MyISAM to InnoDB by creating a new InnoDB table and doing a select-and-insert. Although this sounds doable for 106 tables, it requires renaming all foreign keys for the new tables to prevent collisions with existing objects in the database, and then renaming them back after dropping the old tables, since the original names are required for smooth future Ambari upgrades. The approach described below is more direct and simpler, and avoids any hand edits of the DDL: create a blank schema using the DDL script, then insert the data from the database dump.

Note 1: This has been tried and tested on an upgrade from Ambari 2.2.2 to 2.4.2, but the approach and steps are generic enough to apply to any 2.x version and upwards.

Note 2: If you are in the middle of an upgrade and hit this storage engine error, make sure to revert to the last backup of the database as well as the Ambari bits (yum downgrade).
Steps:

1. Edit /var/lib/ambari-server/Ambari-DDL-MySQL-CREATE.sql on your ambari-server host to add the following lines, which set the appropriate engine when re-creating the database. These should be added right at the beginning of the file.

-- Set default_storage_engine to InnoDB
-- The storage_engine variable should be used for versions prior to MySQL 5.6
set @version_short = substring_index(@@version, '.', 2);
set @major = cast(substring_index(@version_short, '.', 1) as SIGNED);
set @minor = cast(substring_index(@version_short, '.', -1) as SIGNED);
set @engine_stmt = IF(@major >= 5 AND @minor >= 6, 'SET default_storage_engine=INNODB', 'SET storage_engine=INNODB');
prepare statement from @engine_stmt;
execute statement;
DEALLOCATE PREPARE statement;

2. Take a backup of your good, working-condition, non-upgraded Ambari database; let's call it dbdump.sql.

3. Create an INSERT-only SQL file, let's call it insert_final.sql, without the CREATE statements from your dbdump file. Optionally, purge dbdump.sql of historical items to bring the dump down to a manageable size. Use the following cat/grep pipeline to achieve this (without the purge option, use only the first grep):

cat dbdump.sql | grep "INSERT INTO" | grep -v "INSERT INTO \`alert" | grep -v "INSERT INTO \`host_role_command\`" | grep -v "INSERT INTO \`execution_command\`" | grep -v "INSERT INTO \`request" | grep -v "INSERT INTO \`stage" | grep -v "INSERT INTO \`topology" | grep -v "INSERT INTO \`blueprint" | grep -v "INSERT INTO \`qrtz" | grep -v "INSERT INTO \`hostgroup" > insert_final.sql

4. Drop the Ambari database and re-create it using /var/lib/ambari-server/Ambari-DDL-MySQL-CREATE.sql:

mysql> drop database ambari;
mysql> create database ambari;
mysql> use ambari;
mysql> source /var/lib/ambari-server/Ambari-DDL-MySQL-CREATE.sql

5. Make sure the new storage engine is set to InnoDB:

SELECT `ENGINE` FROM `information_schema`.`TABLES` WHERE `TABLE_SCHEMA`='ambari';

6. Add the following statements to the beginning of insert_final.sql. The DELETEs remove the default seed rows inserted by the DDL script, which are re-inserted from your database dump later in the file; without them you would get duplicate-key constraint violations. Silencing the constraint checks allows out-of-order inserts without constraint violations.

SET unique_checks=0;
SET FOREIGN_KEY_CHECKS=0;
DELETE FROM `adminpermission`;
DELETE FROM `adminprincipal`;
DELETE FROM `adminprincipaltype`;
DELETE FROM `adminprivilege`;
DELETE FROM `adminresource`;
DELETE FROM `adminresourcetype`;
DELETE FROM `viewentity`;
DELETE FROM `metainfo`;
DELETE FROM `users`;
DELETE FROM `ambari_sequences`;

7. Turn the constraints back on at the end of the dump file. Append the following at the end of insert_final.sql:

SET unique_checks=1;
SET FOREIGN_KEY_CHECKS=1;

8. Execute insert_final.sql on the ambari database:

mysql> use ambari;
mysql> source insert_final.sql

(These instructions should not fail with any error messages. If you get primary key constraint violations, it is in all likelihood some seed data inserted by the DDL script; just add the corresponding DELETE statement at the beginning of the file.)

9. With no errors in step 8, start the Ambari server and verify that everything looks good and there are no SQL errors in the server logs. If step 8 fails, make the minor adjustments and rinse and repeat steps 4 through 8 until there are no errors.

10. Proceed with the Ambari upgrade to the new desired version.
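For reference, the cycle from steps 2 through 8 can also be scripted. This is a hedged sketch rather than part of the original procedure; it assumes database name ambari, user ambari, and the file names used above, and it skips the optional purge. Test it on a copy first.

# Sketch of steps 2-8 end to end; assumes db/user 'ambari'.
mysqldump -u ambari -p ambari > dbdump.sql                               # step 2: backup
grep "INSERT INTO" dbdump.sql > insert_final.sql                         # step 3: inserts only
# steps 6-7: prepend the SET/DELETE statements and append the re-enabling
# SET statements to insert_final.sql, exactly as shown above.
mysql -u ambari -p -e "DROP DATABASE ambari; CREATE DATABASE ambari;"    # step 4
mysql -u ambari -p ambari < /var/lib/ambari-server/Ambari-DDL-MySQL-CREATE.sql
mysql -u ambari -p ambari < insert_final.sql                             # step 8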
03-09-2017
09:53 PM
hdp242-s1.openstacklocal doesn't resolve for me. If the metric is not in the metadata, that means HBase has never sent it to AMS. Did you perform a scan / run the Ambari smoke test to see whether any data is sent? Could it be that this metric is not emitted by the HBase version you are using?
03-01-2017
06:26 PM
2 Kudos
Are you able to see other RegionServer metrics on this dashboard? I checked my test cluster with HDP 2.6 and Ambari 2.5 and I am able to see them. One way to verify whether these metrics were ever sent to AMS is to call the AMS metadata API in your browser: http://<ams-collector-host>:6188/ws/v1/timeline/metrics/metadata and look for the metric name in the response, for example: regionserver.Server.ScanTime
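From a shell, a quick way to do the same check, assuming curl is available (replace <ams-collector-host> with your collector's hostname):

# Check whether the metric appears in the AMS metadata response.
curl -s "http://<ams-collector-host>:6188/ws/v1/timeline/metrics/metadata" | grep -o "regionserver.Server.ScanTime"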
02-10-2017
03:53 PM
In embedded mode we combine the RegionServer and HBase Master heapsize, since only one daemon is running.
Xmn settings:
regionserver_xmn_size
hbase_master_xmn_size
The JDK 8 perm setting is unnecessary because the permanent generation was replaced by native Metaspace (off-heap), so the setting itself effectively doesn't matter.
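A quick way to see what the single embedded daemon is actually running with; the process pattern is an assumption based on a typical AMS install, so adjust it if yours differs:

# Assumption: the embedded AMS HBase daemon runs from ams-hbase.
# Prints the effective -Xmx/-Xmn flags of the running process.
ps -ef | grep -i "ams-hbase" | grep -o -e "-Xm[xn][0-9]*[mgMG]"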
02-10-2017
05:56 AM
Xmn should be about 15% of Xmx; I suggest setting this value to at least 1 GB. maxperm does not need to be more than 128 MB.
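As a worked example with an illustrative 8 GB heap (the figure itself is hypothetical):

# 15% of an 8192 MB Xmx:
echo $((8192 * 15 / 100))    # prints 1228 (MB), so ~1-1.25 GB is a sane Xmn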
02-09-2017
06:06 PM
The comment from @Jay SenSharma regarding Region metrics is also important and applicable. Note: additionally, make sure the Xmn setting is about 15% of Xmx in both ams-env and ams-hbase-env.
02-09-2017
05:55 PM
If this is a production cluster, switching to distributed mode will make use of the cluster ZooKeeper, which will make the system a lot more stable. Embedded mode works perfectly fine for a cluster size of 40 nodes, provided memory and disk are not heavily contended. In embedded mode AMS HBase writes to one disk and talks to the embedded ZooKeeper, so the straightforward recommendations, without looking at the full logs and configs and without changing the mode, are:

ams-env :: metrics_collector_heapsize = 1024
ams-hbase-env :: hbase_regionserver_heapsize = 4096

Make sure hbase.rootdir and hbase.tmp.dir are not pointing to the same location; the key is to put hbase.rootdir on a non-contended disk. If you switch to distributed mode, the disk settings do not matter: https://cwiki.apache.org/confluence/display/AMBARI/AMS+-+distributed+mode
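To quickly confirm the two directories differ on the collector host, a hedged check; the conf path is an assumption based on a typical AMS layout:

# Assumption: AMS collector config lives under /etc/ambari-metrics-collector/conf.
grep -A1 -e "hbase.rootdir" -e "hbase.tmp.dir" /etc/ambari-metrics-collector/conf/ams-hbase-site.xml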
02-07-2017
07:11 PM
Please also look at: https://cwiki.apache.org/confluence/display/AMBARI/Troubleshooting+Guide

How big is your cluster?