Member since: 07-30-2019
Posts: 53
Kudos Received: 136
Solutions: 16
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 11456 | 01-30-2017 05:05 PM |
|  | 6751 | 01-13-2017 03:46 PM |
|  | 3237 | 01-09-2017 05:36 PM |
|  | 2143 | 01-09-2017 05:29 PM |
|  | 1543 | 10-07-2016 03:34 PM |
05-18-2018
01:26 PM
I have been trying your suggestion: I added the host via the above API call and get the following:
18 May 2018 14:18:37,613 INFO [qtp-ambari-agent-326] TopologyManager:637 - TopologyManager.onHostRegistered: Entering
18 May 2018 14:18:37,613 INFO [qtp-ambari-agent-326] TopologyManager:698 - TopologyManager: Queueing available host node5
18 May 2018 14:19:37,959 WARN [alert-event-bus-1] AlertReceivedListener:497 - Unable to process alert ambari_agent_disk_usage for cluster idp and host node5 because the host is not a part of the cluster.
18 May 2018 14:20:06,779 INFO [ambari-client-thread-74] TopologyManager:485 - TopologyManager.scaleHosts: Entering
18 May 2018 14:20:06,779 INFO [ambari-client-thread-74] ClusterTopologyImpl:158 - ClusterTopologyImpl.addHostTopology: added host = node5 to host group = host_group_1
18 May 2018 14:20:06,780 INFO [ambari-client-thread-74] HostRequest:205 - Skipping Start task creation since provision action = INSTALL_ONLY
18 May 2018 14:20:06,782 INFO [ambari-client-thread-74] HostRequest:244 - Skipping create of START task for KAFKA_BROKER on PENDING HOST ASSIGNMENT : HOSTGROUP=host_group_1.
18 May 2018 14:20:06,783 INFO [ambari-client-thread-74] HostRequest:244 - Skipping create of START task for ZOOKEEPER_SERVER on PENDING HOST ASSIGNMENT : HOSTGROUP=host_group_1.
18 May 2018 14:20:06,784 INFO [ambari-client-thread-74] HostRequest:244 - Skipping create of START task for METRICS_MONITOR on PENDING HOST ASSIGNMENT : HOSTGROUP=host_group_1.
18 May 2018 14:20:06,784 INFO [ambari-client-thread-74] HostRequest:244 - Skipping create of START task for NIFI_MASTER on PENDING HOST ASSIGNMENT : HOSTGROUP=host_group_1.
18 May 2018 14:20:06,785 INFO [ambari-client-thread-74] HostRequest:244 - Skipping create of START task for LOGSEARCH_LOGFEEDER on PENDING HOST ASSIGNMENT : HOSTGROUP=host_group_1.
18 May 2018 14:20:06,786 INFO [ambari-client-thread-74] HostRequest:244 - Skipping create of START task for ZOOKEEPER_CLIENT on PENDING HOST ASSIGNMENT : HOSTGROUP=host_group_1.
18 May 2018 14:20:06,786 INFO [ambari-client-thread-74] HostRequest:99 - HostRequest: Created request for host: node5
18 May 2018 14:20:06,786 INFO [ambari-client-thread-74] LogicalRequest:437 - LogicalRequest.createHostRequests: all host requests size 1 , outstanding requests size = 0
18 May 2018 14:20:06,790 INFO [ambari-client-thread-74] TopologyManager:923 - TopologyManager.createLogicalRequest: created LogicalRequest with ID = 52 and completed persistence of this request.
18 May 2018 14:20:06,793 INFO [ambari-client-thread-74] TopologyManager:845 - TopologyManager.processRequest: Entering
18 May 2018 14:20:06,794 INFO [ambari-client-thread-74] TopologyManager:863 - TopologyManager.processRequest: host name = node5 is mapped to LogicalRequest ID = 52 and will be removed from the reserved hosts.
18 May 2018 14:20:06,794 INFO [ambari-client-thread-74] TopologyManager:876 - TopologyManager.processRequest: offering host name = node5 to LogicalRequest ID = 52
18 May 2018 14:20:06,794 INFO [ambari-client-thread-74] LogicalRequest:101 - LogicalRequest.offer: attempting to match a request to a request for a reserved host to hostname = node5
18 May 2018 14:20:06,794 INFO [ambari-client-thread-74] LogicalRequest:110 - LogicalRequest.offer: request mapping ACCEPTED for host = node5
18 May 2018 14:20:06,794 INFO [ambari-client-thread-74] LogicalRequest:113 - LogicalRequest.offer returning response, reservedHost list size = 0
18 May 2018 14:20:06,795 INFO [ambari-client-thread-74] TopologyManager:886 - TopologyManager.processRequest: host name = node5 was ACCEPTED by LogicalRequest ID = 52 , host has been removed from available hosts.
18 May 2018 14:20:06,795 INFO [ambari-client-thread-74] ClusterTopologyImpl:158 - ClusterTopologyImpl.addHostTopology: added host = node5 to host group = host_group_1
18 May 2018 14:20:06,797 INFO [ambari-client-thread-74] TopologyManager:963 - TopologyManager.processAcceptedHostOffer: queue tasks for host = node5 which responded ACCEPTED
18 May 2018 14:20:06,797 INFO [ambari-client-thread-74] TopologyManager:988 - TopologyManager.processAcceptedHostOffer: queueing tasks for host = node5
18 May 2018 14:20:06,797 INFO [ambari-client-thread-74] TopologyManager:904 - TopologyManager.processRequest: not all required hosts have been matched, so adding LogicalRequest ID = 52 to outstanding requests
What can I do to move the request from PENDING so that the components actually start? Best
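For context, here is a rough sketch of the kind of scale-up request I mean by "the above API call", written in Python with requests. It is only an illustration: the Ambari URL, credentials, and blueprint name are placeholders, while the cluster idp, host node5, host group host_group_1, and request ID 52 come from the logs above.

```python
# Illustrative sketch only: server URL, credentials, and blueprint name are
# placeholders; cluster "idp", host "node5", "host_group_1", and request 52
# are taken from the log output above.
import requests

AMBARI = "http://ambari.example.com:8080"   # placeholder Ambari server URL
AUTH = ("admin", "admin")                   # placeholder credentials
HEADERS = {"X-Requested-By": "ambari"}      # header required by the Ambari REST API

# Ask Ambari to add node5 to host_group_1 of the existing cluster "idp".
resp = requests.post(
    f"{AMBARI}/api/v1/clusters/idp/hosts/node5",
    auth=AUTH,
    headers=HEADERS,
    json={"blueprint": "my_blueprint", "host_group": "host_group_1"},  # placeholder blueprint name
)
print(resp.status_code, resp.text)

# The logical request created for the host (ID 52 in the logs) can then be polled.
status = requests.get(f"{AMBARI}/api/v1/clusters/idp/requests/52", auth=AUTH, headers=HEADERS)
print(status.json().get("Requests", {}).get("request_status"))
```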
06-17-2016
01:15 PM
@Philippe Back... From all krb5.conf files on all nodes in the Hadoop cluster.
03-09-2018
04:52 PM
@David Streever Hi David, I am trying to enable Kerberos on a cluster running Ambari 2.6.0 with HDP 2.6.3 and IPA 4.5.2, and I want to keep the cluster name in the Ambari USER names. When I use the above procedure I run into problems when the USER principals are created, and subsequently when the keytabs are generated. It looks like the Ambari wizard does not change the local user name (%5) to also carry the lower-case cluster name, so the USER principals are created as the local user name without the cluster name. Then, when running gen_keytabs.sh, I get the following:

Failed to parse result: PrincipalName not found.
Retrying with pre-4.0 keytab retrieval method...
Failed to parse result: PrincipalName not found.
Failed to get keytab!
Failed to get keytab
chown: cannot access '/etc/security/keytabs/smokeuser.headless.keytab': No such file or directory
chmod: cannot access '/etc/security/keytabs/smokeuser.headless.keytab': No such file or directory
Failed to parse result: PrincipalName not found.

I can see why this happens, but I am unsure what the USER name should be. In other words, do I edit the kerberos.csv so that the local username matches the new Kerberos principal? Do the local usernames on each host in the cluster need to match the Kerberos USER principal names? I have tried with and without the cluster name, and I still run into errors during the Start and Test phase having to do with credentials not working. I am hoping that once I figure this out I can write a new HOWTO for the IPA manual principal process. FYI, I was unable to get the Ambari automatic Kerberization to work using the FreeIPA experimental feature before moving on to attempting your manual process. Any insights or assistance are much appreciated.
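To make the mapping question concrete, here is a small sketch (not a fix) that scans the exported kerberos.csv and flags rows where the principal carries the lower-cased cluster name but the local username does not. The column names principal and local_username are assumptions and would need to be adjusted to the actual CSV header.

```python
# Sketch only: flag kerberos.csv rows where the principal embeds the cluster name
# but the local username does not. Column names are assumptions; adjust them to
# the header of the CSV exported by the Ambari Kerberos wizard.
import csv

CLUSTER = "mycluster"   # placeholder: lower-cased cluster name expected in principals

with open("kerberos.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        principal = row.get("principal", "")        # assumed column name
        local_user = row.get("local_username", "")  # assumed column name
        if CLUSTER in principal and CLUSTER not in local_user:
            print(f"Mismatch: principal={principal!r}, local username={local_user!r}")
```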
06-17-2016
08:42 PM
Thank you @gkesavan, here is the entry which worked for me:

<mirror>
  <id>hw_central</id>
  <name>Hortonworks Mirror of Central</name>
  <url>http://repo.hortonworks.com/content/groups/public/</url>
  <mirrorOf>central</mirrorOf>
</mirror>
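In case it helps anyone else: this kind of mirror entry belongs inside the <mirrors> section of the Maven settings.xml (typically ~/.m2/settings.xml).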
09-28-2015
09:45 PM
3 Kudos
I would highly recommend against re-using another ZK quorum for this purpose. The risk of network partitioning is too high and the benefits aren't clear. As David mentions above, the NN doesn't put a high load on ZK for leader election. Have each NN HA pair (each cluster, for that matter) talk to its own ZK quorum within the same network segment.
10-28-2015
03:16 AM
For ambari-agent, use the $AMBARI_AGENT_LOG_DIR environment variable. There is a typo in the /usr/sbin/ambari-agent shell script, though: change $AMBARI_LOG_DIR to $AMBARI_AGENT_LOG_DIR in /usr/sbin/ambari-agent, then set $AMBARI_AGENT_LOG_DIR to the desired location. The bug has been fixed in trunk: https://reviews.apache.org/r/37990/ For ambari-server, there is no such environment variable; I'll file a bug for this and have it triaged.
09-30-2015
01:11 AM
The question is: why is the "nn" user trying to access data?
11-24-2015
12:35 AM
1 Kudo
Regarding "you won't see any new mount points": It's important to distinguish between the NFS Gateway service and the NFS client, even though they can both be on the same machine. NFS services export mountpoints, ie make them available for clients to mount. NFS clients mount them and make use of them as filesystems. It is true that for some applications, it would be convenient to have the NFS mountpoint mounted on the cluster nodes, but this is a Client functionality, not part of Gateway setup. And for many other applications, it is more important to have the NFS mountpoint available for use by other hosts outside the Hadoop cluster -- which can't be managed by Ambari.
09-25-2015
09:16 PM
One good thing to show in the tutorial would be how this lets you manage multi-tenancy for Spark (currently only available via Spark on YARN): https://github.com/hortonworks-gallery/ambari-zeppelin-service/blob/master/README.md#zeppelin-yarn-integration
09-24-2015
07:59 PM
1 Kudo
Applied this recently as well with a MySQL 5.5 instance with HA (Tungsten). Haven't seen the issue on a basic 5.6 install.