Member since: 09-29-2015
Posts: 63
Kudos Received: 107
Solutions: 13
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2263 | 05-04-2017 12:32 AM
 | 3051 | 12-09-2016 07:58 PM
 | 7496 | 12-09-2016 07:53 PM
 | 1101 | 06-08-2016 09:26 PM
 | 1723 | 06-08-2016 09:01 PM
12-28-2015
06:57 PM
Today, Ambari calls "hive.cmd --service schematool -info ...", and if that call fails, it assumes that the schema hasn't been created, so it then calls "hive.cmd --service schematool -initSchema ...".
This works for HDP 2.0.6 and higher. See https://github.com/apache/ambari/blob/trunk/ambari...
Several internal JIRAs have tracked this issue: BUG-14330, BUG-39736, and BUG-17404.
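For illustration, here is a minimal sketch of that check-then-init logic. It assumes the Linux "hive" launcher (the post shows hive.cmd, the Windows wrapper) and a MySQL metastore; adjust -dbType for your database.

# Sketch of Ambari's check-then-init flow for the Hive metastore schema.
# Assumes the Linux "hive" launcher and a MySQL metastore (both assumptions).
if hive --service schematool -dbType mysql -info; then
  echo "Metastore schema already exists; nothing to do."
else
  echo "Schema check failed; initializing a fresh schema."
  hive --service schematool -dbType mysql -initSchema
fi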
12-28-2015
05:21 PM
Hassan, can you provide the output after running this?

SELECT u.upgrade_id, u.from_version, u.to_version, u.direction, u.upgrade_type, s.stage_id, s.supports_auto_skip_failure, SUBSTR(s.request_context, 0, 40)
FROM upgrade u
JOIN stage s ON u.request_id = s.request_id
ORDER BY s.stage_id ASC;

Also, can you run a describe on the stage table? Here's what my table looks like:

ambari=> \d stage
Table "ambari.stage"
Column | Type | Modifiers
----------------------------+------------------------+--------------------
stage_id | bigint | not null
request_id | bigint | not null
cluster_id | bigint | not null
skippable | smallint | not null default 0
supports_auto_skip_failure | smallint | not null default 0
log_info | character varying(255) | not null
request_context | character varying(255) |
cluster_host_info | bytea | not null
command_params | bytea |
host_params | bytea |
Indexes:
"stage_pkey" PRIMARY KEY, btree (stage_id, request_id)
Foreign-key constraints:
"fk_stage_request_id" FOREIGN KEY (request_id) REFERENCES request(request_id)
Referenced by:
TABLE "host_role_command" CONSTRAINT "fk_host_role_command_stage_id" FOREIGN KEY (stage_id, request_id) REFERENCES stage(stage_id, request_id)
TABLE "role_success_criteria" CONSTRAINT "role_success_criteria_stage_id" FOREIGN KEY (stage_id, request_id) REFERENCES stage(stage_id, request_id)
12-17-2015
02:41 AM
32 Kudos
One of the most gargantuan tasks for any cluster administrator is upgrading the bits, since it is a tedious, risky, and inherently complex process that can take days. Ambari comes to the rescue with two features, Rolling Upgrade (RU) and Express Upgrade (EU), aimed at upgrading the HDP cluster with a couple of clicks. Rolling Upgrade was first released in Ambari 2.0.0 (March 2015), and the latest incarnation (as of October 2015) in Ambari 2.1.2 adds robustness and a couple of goodies. Express Upgrade is set to be released in Ambari 2.2.0 (ETA is Dec 16, 2015). So how does it work? What are the gotchas? What do I need to know?
Overview: Both upgrade mechanisms update the bits and configurations of your cluster. The main difference is that RU upgrades one node at a time and keeps the cluster and all Hadoop jobs running, while EU stops all services, changes the version and configs, then starts all services in parallel. RU is therefore the prime candidate for environments that cannot take downtime, whereas EU is faster precisely because it takes downtime. If your cluster is large, on the order of 500+ nodes, and you must finish the upgrade in a weekend, then EU is the clear choice.
Pre-Reqs:

Bits: In both cases, the user must first register a new repo and install the bits side-by-side. For example:

/usr/hdp/2.2.4.2-2 (current version)
/usr/hdp/2.3.0.0-2557 (version that will be upgraded to)

The good news is that this can be done ahead of time. Ambari will only install the bits for the services needed on each host; this saves disk space, since the full HDP stack can take up to 2.5 GB.
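Since the bits can be staged ahead of time, it is worth verifying the side-by-side install on a host before scheduling the upgrade. A minimal sketch using the stock hdp-select tool; the version string is just the example above.

# Verify that the target HDP version is staged next to the current one.
# The version string is the example from above; substitute your own build.
TARGET="2.3.0.0-2557"

# List every HDP version installed on this host.
hdp-select versions

# The new bits should live under /usr/hdp/<version> next to the current ones.
ls -d /usr/hdp/"${TARGET}" \
  && echo "Bits for ${TARGET} are staged on $(hostname -f)" \
  || echo "Bits for ${TARGET} are MISSING on $(hostname -f)"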
Pre-Checks: It is wise for users to make sure that they pass the pre-checks before attempting to start the upgrade. The pre-checks include:

- All hosts have the repo version installed
- All components are installed
- All hosts are heartbeating
- All hosts in maintenance state do not have any master components
- No services are in maintenance mode
- All services are up
- Hive is configured for dynamic discovery
- For RU, client retry is enabled for HDFS, Hive, and Oozie
- For RU, YARN has work-preserving restart enabled
- For RU, the MapReduce2 History Server has state-preserving mode enabled
- For RU, MapReduce jobs reference hadoop libraries from the distributed cache
- For RU, Tez jobs reference hadoop libraries from the distributed cache
- For RU, Hive has at least 2 Metastores
- For RU, NameNode High Availability is required, and must use a dfs nameservice

If the user adds any services or hosts after already installing the bits for a repo, then they must redistribute the bits, since Ambari will mark that repo as "out_of_sync".
Orchestration: Rolling Upgrade orchestrates the services one at a time, and restarts one component at a time. When a component is restarted, it is stopped on the old version, then started on the newer version. For HDP, this is done by calling hdp-select set $comp $version to set the symlink of the binary, and, if on HDP 2.3 or higher, also calling conf-select set-conf-dir $package $stack_version $conf_version to change the symlink for the configuration. (A sketch of what these symlinks look like follows below.)

The binary symlinks are controlled by:

/usr/hdp/current/$comp-name/ -> /usr/hdp/$version/$comp

The confs are controlled by two symlinks:

/etc/$comp/conf -> /usr/hdp/$version/$comp/conf -> /etc/$comp/$version/0
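To see how this plays out on a host, here is a minimal sketch that switches one component and verifies the symlinks; the component name hadoop-hdfs-datanode and the version string are illustrative values.

# Sketch: switch the DataNode binaries to a new version and inspect the links.
# "hadoop-hdfs-datanode" and the version string are illustrative.
COMP="hadoop-hdfs-datanode"
VERSION="2.3.0.0-2557"

# Point /usr/hdp/current/<comp> at the new version's binaries.
hdp-select set "${COMP}" "${VERSION}"

# Verify the binary symlink: /usr/hdp/current/<comp> -> /usr/hdp/<version>/<pkg>
readlink "/usr/hdp/current/${COMP}"

# Verify the conf symlink chain: /etc/<comp>/conf -> /usr/hdp/<version>/<pkg>/conf
readlink /etc/hadoop/conf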
RU restarts the services from the bottom up, i.e.:

- ZK
- Ranger
- Core Masters: HDFS, MR2, YARN, HBase
- Core Slaves: DataNode, RegionServer, NodeManager on each host
- Auxiliary & Clients: Hive, Spark, Oozie, Falcon, Clients, Kafka, Knox, Storm, Slider, Flume, Accumulo

Before starting, Ambari will prompt the user to take backups of the database, and will automatically snapshot the HDFS namespace. Throughout the process, Ambari will orchestrate Service Checks after critical points. At the end, Ambari will finalize the rolling upgrade and save the state in its database.

Express Upgrade has a slightly different orchestration: it stops all services on the current stack from the top down, then it changes to the new stack and applies configs, then it starts services from the bottom up. When both stopping and starting services, it will do so in parallel.
Furthermore, because EU takes downtime, it does not require NameNode High Availability.
Merging Configurations: When upgrading across major versions, e.g. HDP 2.2 -> 2.3, Ambari has to merge configs. E.g.:

HDP 2.2 default configs = base
HDP 2.3 default configs = desired

Ambari has rules for how to add, rename, transform, and delete configs from the base stack to the desired stack. Any properties that the user modified in the base stack will be persisted, even if the new stack has a different default value.

Error Handling: All operations are retry-able in the case of an error, and we've made a considerable effort to ensure all ops are idempotent. Moreover, non-critical steps (such as Service Checks and the higher-level services) can always be skipped, since they can always be fixed right before finalizing. In Ambari 2.1.2, Ambari introduced two values that allow the Upgrade Packs to automatically skip Service Check or component failures. See AMBARI-13032. In Ambari 2.1.3, these error-handling options can be controlled at run-time, which makes it easier to ignore errors until the end. See AMBARI-13018. (A hedged API sketch of these run-time options follows below.)
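As a rough illustration of those run-time options, the request that starts an upgrade can carry skip flags in its body. This is a hedged sketch: the skip_failures and skip_service_check_failures field names are assumptions based on AMBARI-13018 and may differ by Ambari version.

# Hedged sketch: start a rolling upgrade that auto-skips failures.
# The skip_failures / skip_service_check_failures field names are assumptions
# based on AMBARI-13018; verify them against your Ambari version's API.
curl -u $admin:$password -X POST -H 'X-Requested-By:admin' \
  http://$server:8080/api/v1/clusters/$name/upgrades \
  -d '{"Upgrade":{"repository_version":"2.3.0.0-2557","type":"ROLLING","skip_failures":"true","skip_service_check_failures":"true"}}'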
Further, Ambari 2.1.3 also allows suppressing manual tasks so that they run silently. See AMBARI-13457.

Gotchas, Tips, and Tricks:

1. Check out this presentation: RU Tips, Tricks, Hacks
2. Always move to the latest version of Ambari.
3. Backup the Ambari database before starting the upgrade. If you run into any problems attempting to "Save the Cluster State", this is likely because some hosts/components are still on the older version. To find these components, run:

SELECT h.host_name, hcs.service_name, hcs.component_name, hcs.version
FROM hostcomponentstate hcs
JOIN hosts h ON hcs.host_id = h.host_id
ORDER BY hcs.version, hcs.service_name, hcs.component_name, h.host_name;

To fix the components, on each host, run this for the applicable components:

hdp-select versions
hdp-select set $comp_name $desired_version

and restart the components (note: you may have to do this manually, or by enabling the /#/experimental flag). A hedged loop over several components is sketched just after this list.
4. If you run into any problems during the upgrade, try RU Magician! It's a python script that checks the database and can perform some updates for you: RU Magician
5. For advanced users, you can still modify properties by navigating to http://$server:8080/#/experimental and enabling "opsDuringRollingUpgrade"
6. If planning to upgrade Knox to HDP 2.3.2 or higher, you must first upgrade Ambari to 2.1.2
7. If patching tez.lib.uris, then you must reset the path to the original value before starting the upgrade; otherwise, Ambari will persist the value of the patched jar, which will not work in the new version.
8. If performing a manual stack upgrade, don't forget to call this to save the new version as "current":

ambari-server set-current --cluster-name=$CLUSTERNAME --version-display-name=$VERSION_NAME

In Ambari 2.2, you can now force the finalization, thereby skipping any errors. In Ambari 2.1.3, you can force the finalization by running the command above with "--force". See AMBARI-13591.
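Here is the loop referenced in tip 3; a minimal sketch, assuming the component names that hdp-select reports on that host (the list here is illustrative).

# Sketch for tip 3: point several components at the desired version on one host.
# The component list is illustrative; use the names that hdp-select reports.
DESIRED="2.3.0.0-2557"

# Show what is installed before making changes.
hdp-select versions

for comp in hadoop-hdfs-datanode hadoop-yarn-nodemanager hbase-regionserver; do
  echo "Setting ${comp} -> ${DESIRED}"
  hdp-select set "${comp}" "${DESIRED}"
done
# Afterwards, restart the affected components (manually, or via Ambari with
# the /#/experimental flag enabled, as noted above).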
Example of APIs:

Run the pre-checks:
curl -u $admin:$password -X POST -H 'X-Requested-By:admin' http://$server:8080/api/v1/clusters/$name/rolling...

Start the upgrade:
curl -u $admin:$password -X POST -H 'X-Requested-By:admin' http://$server:8080/api/v1/clusters/$name/upgrade... -d '{"Upgrade":{"repository_version":"2.3.0.0-2557", "type":"ROLLING"}}'

Check the status:
curl -u $admin:$password -X GET -H 'X-Requested-By:admin' http://$server:8080/api/v1/clusters/c1/upgrades

Debugging & Logging:

If the upgrade fails to finalize, find out which hosts and components are still not on the newer version.

-- Check the repo version state
SELECT rv.version, cv.state FROM repo_version rv
JOIN cluster_version cv ON rv.repo_version_id = cv.repo_version_id
ORDER BY rv.version ASC;
-- Check the hosts
SELECT rv.version, h.host_name, hv.state
FROM repo_version rv
JOIN host_version hv ON rv.repo_version_id = hv.repo_version_id
JOIN hosts h ON hv.host_id = h.host_id
ORDER BY rv.version ASC, h.host_name;
-- Find the components on the wrong version,
-- call "hdp-select set <comp> <version>", check the config symlinks, and restart them manually
SELECT hcs.service_name, hcs.component_name, h.host_name, hcs.version
FROM hostcomponentstate hcs
JOIN hosts h ON hcs.host_id = h.host_id
ORDER BY hcs.version ASC, hcs.service_name, hcs.component_name, h.host_name;
Postgres:
SELECT u.upgrade_id, u.direction, substr(g.group_title, 0, 40), substr(i.item_text, 0, 80), substr(hrc.status, 0, 40), hrc.task_id, h.host_name, hrc.output_log, hrc.error_log
FROM upgrade_group g JOIN upgrade u ON g.upgrade_id = u.upgrade_id
JOIN upgrade_item i ON i.upgrade_group_id = g.upgrade_group_id
JOIN host_role_command hrc ON hrc.stage_id = i.stage_id AND hrc.request_id = u.request_id
JOIN hosts h ON hrc.host_id = h.host_id
ORDER BY u.upgrade_id, g.upgrade_group_id, i.stage_id;
MySQL:
SELECT u.upgrade_id, u.direction, LEFT(g.group_title, 40), LEFT(i.item_text, 80), LEFT(hrc.status, 40), hrc.task_id, h.host_name, hrc.output_log, hrc.error_log
FROM upgrade_group AS g
JOIN upgrade AS u ON g.upgrade_id = u.upgrade_id
JOIN upgrade_item AS i ON i.upgrade_group_id = g.upgrade_group_id
JOIN host_role_command AS hrc ON hrc.stage_id = i.stage_id AND hrc.request_id = u.request_id
JOIN hosts AS h ON hrc.host_id = h.host_id
ORDER BY u.upgrade_id, g.upgrade_group_id, i.stage_id;
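The output_log and error_log columns in these queries point at per-task files on the host that ran the command. A small sketch for inspecting them, assuming the default agent data directory; the task id is illustrative.

# Sketch: inspect the command logs for a failed upgrade task on its host.
# Assumes the default /var/lib/ambari-agent/data layout; task id 1234 is illustrative.
TASK_ID=1234
tail -n 50 /var/lib/ambari-agent/data/output-${TASK_ID}.txt   # stdout of the task
tail -n 50 /var/lib/ambari-agent/data/errors-${TASK_ID}.txt   # stderr of the task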
If you have any questions, feel free to email user@ambari.apache.org
12-11-2015
09:01 PM
If you still run into issues, most likely one of the hosts still has components on an older version.
You can triage these by running some SQL queries, found here: https://community.hortonworks.com/articles/2473/rolling-upgrade-express-upgrade-in-ambari.html

For the problematic hosts, make sure to call hdp-select set <comp> <version>, and then restart the components (using Ambari if you can; if the restarts get queued, there are ways to "pause" the Rolling Upgrade so that other actions can still run).
In the worst case, you can always update the database manually, or use RU Magician, as described in the link above.

-- Check the repo version state
SELECT rv.version, cv.state
FROM repo_version rv
JOIN cluster_version cv ON rv.repo_version_id = cv.repo_version_id
ORDER BY rv.version ASC;
-- Check the hosts
SELECT rv.version, h.host_name, hv.state
FROM repo_version rv
JOIN host_version hv ON rv.repo_version_id = hv.repo_version_id
JOIN hosts h ON hv.host_id = h.host_id
ORDER BY rv.version ASC, h.host_name;
-- Find the components on the wrong version,
-- call "hdp-select set <comp> <version>", check the config symlinks, and restart them manually
SELECT hcs.service_name, hcs.component_name, h.host_name, hcs.version
FROM hostcomponentstate hcs
JOIN hosts h ON hcs.host_id = h.host_id
ORDER BY hcs.version ASC, hcs.service_name, hcs.component_name, h.host_name;
12-10-2015
09:59 PM
1 Kudo
Is the host only going to contain DBs, or will it also contain Ambari Server, HiveServer, etc? Spreading it out is wise, but if you're really constrained, the Ambari DB is usually no more than 100 MB.
There's an article on how to optimize the Ambari DB for large clusters (200+ nodes):
http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.1.0/bk_ambari_reference_guide/content/ch_tuning_ambari_performance.html
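For a quick read on how much space the Ambari DB is actually using, a minimal sketch, assuming a Postgres-backed Ambari with the default database and user names.

# Sketch: check the current size of the Ambari database.
# Assumes Postgres and the default "ambari" database and user (you will be
# prompted for the password configured during ambari-server setup).
psql -U ambari -d ambari -c "SELECT pg_size_pretty(pg_database_size('ambari'));"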
12-02-2015
03:00 AM
2 Kudos
The hosts table in the database has a field called rack_info that supports up to 255 chars.
It can be edited from the UI on the Host page for a single host. In order to edit it using the API, you can make a single request that edits it for any number of hosts. E.g.:

curl -u $username:$password -X PUT -H 'X-Requested-By:admin' http://$server:8080/api/v1/clusters/$clustername/hosts -d '{"RequestInfo":{"context":"Set Rack","query":"Hosts/host_name.in($FQDN_1,$FQDN_2,$FQDN_3)"},"Body":{"Hosts":{"rack_info":"/my_new_rack_value"}}}'

Actual call:

curl -u admin:admin -X PUT -H 'X-Requested-By:admin' http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts -d '{"RequestInfo":{"context":"Set Rack","query":"Hosts/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org,c6403.ambari.apache.org)"},"Body":{"Hosts":{"rack_info":"/my_new_rack_value"}}}'
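To verify that the change took effect, you can read the field back with the standard partial-response fields parameter:

# Sketch: read rack_info back for every host to verify the PUT above.
curl -u admin:admin -X GET -H 'X-Requested-By:admin' \
  'http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts?fields=Hosts/rack_info'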
11-24-2015
02:17 AM
2 Kudos
On the cluster with the NameNodes, set these properties in HDFS under custom core-site:

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

Navigate to http://server:8080/api/v1/clusters/$name/configurations?type=hdfs-site, click on the last config, and take note of several of the properties.

On the standalone Ambari Server hosting the Views, create the Files View and configure it as follows:

- WebHDFS FileSystem URI: webhdfs://$nameservice:50070
- List of NameNodes: value from dfs.ha.namenodes.$nameservice, e.g. "nn1,nn2"
- NameNode RPC Address: fqdn:8020 (comes from dfs.namenode.rpc-address.$nameservice.$nodename)
- NameNode HTTP (WebHDFS) Address: fqdn:50070 (comes from dfs.namenode.http-address.$nameservice.$nodename)
- Failover Proxy Provider: value from dfs.client.failover.proxy.provider.$nameservice, e.g. "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"

Voila! A hedged sketch for pulling those properties via the API follows below.
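Here is the sketch mentioned above for fetching those properties via the API instead of clicking through; the desired_configs lookup and the grep filter are assumptions about a typical Ambari 2.x setup.

# Hedged sketch: fetch the current hdfs-site properties needed by the Files View.
# 1) Find the tag of the active hdfs-site config.
curl -u admin:$password -H 'X-Requested-By:admin' \
  "http://$server:8080/api/v1/clusters/$name?fields=Clusters/desired_configs/hdfs-site"
# 2) Fetch that version (substitute the tag from step 1) and grep the HA properties.
curl -u admin:$password -H 'X-Requested-By:admin' \
  "http://$server:8080/api/v1/clusters/$name/configurations?type=hdfs-site&tag=$tag" \
  | grep -E 'dfs\.ha\.namenodes|dfs\.namenode\.(rpc|http)-address|failover\.proxy\.provider'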
11-24-2015
02:09 AM
1 Kudo
Someone recently asked me how to configure a standalone Ambari Server with the Files View so that it points correctly to a cluster running NameNode HA. Configured this way, whenever the active and standby NameNodes switch roles, the Files View handles the transition correctly.
Labels: Apache Ambari
10-21-2015
09:56 PM
Ambari 2.1.3 will support Express Upgrade from HDP 2.1 -> 2.3 directly. That feature is similar to Rolling Upgrade, except that the cluster takes downtime. ETA is December.
10-21-2015
09:51 PM
1 Kudo
This may be done in Ambari 2.2 or 2.3, depending on when the community decides to work on it. https://issues.apache.org/jira/browse/AMBARI-4016
https://issues.apache.org/jira/browse/AMBARI-7896