Member since
06-06-2019
81
Posts
58
Kudos Received
11
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1417 | 10-04-2019 07:24 AM |
| | 1751 | 12-12-2016 03:07 PM |
| | 3855 | 12-07-2016 03:41 PM |
| | 3966 | 07-12-2016 02:49 PM |
| | 1374 | 03-04-2016 02:35 PM |
11-17-2015
08:57 PM
I want to avoid using the REST API to force the status. And even though the service is marked as down, there is no option to run a service check.
Labels:
- Apache Ambari
11-08-2015
04:19 AM
Be cautious. All techniques that use the Ambari REST API to auto-start the cluster assume that the ambari-server process is running and reachable over the network. I have seen cases in AWS where the network connection to the Ambari server host was not yet available, so the REST calls failed.
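One way to guard against that race is to wait for the Ambari server port to answer before issuing any REST calls. A minimal Python sketch using only the standard library; the host name, port 8080, and the helper name are assumptions for illustration, not part of any Ambari tooling:

```python
import socket
import time

def wait_for_ambari(host, port=8080, timeout=300, interval=10):
    """Poll until a TCP connection to the Ambari server succeeds,
    or give up after `timeout` seconds. Returns True if reachable."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the server socket is up and routable.
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(interval)
    return False

# Usage (hypothetical host name):
#   if wait_for_ambari("ambari.example.com", 8080, timeout=300):
#       ... safe to issue the REST start calls ...
#   else:
#       ... network to the Ambari host is not up yet; abort ...
```

This only confirms the TCP listener is up, not that Ambari has finished initializing, but it avoids the hard failure mode described above where the REST calls error out before the network is available.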
11-06-2015
06:02 PM
3 Kudos
Is Kerberos also in play? The docs page here gives an example of connecting with the HBase service principal. Have you tried a connection URL like this? jdbc:phoenix:<Zookeeper_host_name>:<port_number>:<secured_Zookeeper_node>:<user_name>
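Since the secured URL is just colon-delimited segments in the order shown above, a small helper can assemble it and avoid ordering mistakes. A Python sketch; the function name and all cluster values below are hypothetical, and the optional keytab segment reflects the common convention of appending a keytab path after the principal:

```python
def phoenix_secure_url(zk_host, port, znode, principal, keytab=None):
    """Build a Kerberos-secured Phoenix JDBC URL of the form
    jdbc:phoenix:<zk_host>:<port>:<znode>:<principal>[:<keytab>]."""
    parts = ["jdbc:phoenix", zk_host, str(port), znode, principal]
    if keytab:
        parts.append(keytab)
    return ":".join(parts)

# Hypothetical cluster values, for illustration only.
url = phoenix_secure_url(
    "zk1.example.com", 2181, "/hbase-secure",
    "hbase/zk1.example.com@EXAMPLE.COM",
    "/etc/security/keytabs/hbase.headless.keytab",
)
print(url)
```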
10-29-2015
08:29 PM
2 Kudos
The commands being run are below. Both fail.

[root@host1 ~]# sudo -u hdfs /usr/bin/kinit -k -t /etc/security/keytabs/hdfs.headless.keytab hdfs/host1.prod.myclient.com@CORP.DS.MYCLIENT.COM
kinit: Keytab contains no suitable keys for hdfs/host1.prod.myclient.com@CORP.DS.MYCLIENT.COM while getting initial credentials

and

[user1@host2.prod /var/www/html]$ sudo -u hdfs /usr/bin/kinit -k -t /etc/security/keytabs/hdfs.headless.keytab
kinit: Client not found in Kerberos database while getting initial credentials
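The first error typically means the keytab holds no keys for the requested principal; a headless keytab commonly contains a principal of the form hdfs-<cluster>@REALM rather than hdfs/<host>@REALM. The second typically means kinit, run without an explicit principal, defaulted to one the KDC does not know. A hedged Python sketch (the helper names are mine, not standard tooling) that lists what is actually in the keytab via `klist -kt` so the right principal can be passed to kinit:

```python
import subprocess

def keytab_principals(keytab_path):
    """Return the set of principals present in a keytab,
    parsed from `klist -kt` output (requires the krb5 client tools)."""
    out = subprocess.run(
        ["klist", "-kt", keytab_path],
        capture_output=True, text=True, check=True,
    ).stdout
    principals = set()
    for line in out.splitlines():
        fields = line.split()
        # Entry lines end with the principal, which always contains '@';
        # header lines ("Keytab name: ...", "KVNO Timestamp ...") do not.
        if fields and "@" in fields[-1]:
            principals.add(fields[-1])
    return principals

def has_principal(principals, wanted):
    """True only if the exact principal has keys in the keytab."""
    return wanted in principals
```

Checking the requested service principal against `keytab_principals("/etc/security/keytabs/hdfs.headless.keytab")` before calling kinit would distinguish the two failures above.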
10-29-2015
07:06 PM
1 Kudo
Because DataNodes use a wide variety of drive configurations, the disk-failure tolerance is configurable. The dfs.datanode.failed.volumes.tolerated property in hdfs-site.xml specifies how many volumes may be lost before the DataNode is marked offline. Once the node is offline, the NameNode uses the information in the block reports to create new replicas for any blocks the failure left under-replicated.
You can tune the HDFS DataNode storage alert warning setting to account for the minimum amount of storage you desire to have available. This will give you a warning before things go critical. Individual disk monitoring is usually handled by an enterprise-level monitoring tool such as OpenView, etc.
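As a sketch, the tolerance described above is set in hdfs-site.xml; the value 1 below is illustrative (the default of 0 takes the DataNode offline on the first failed volume):

```xml
<!-- hdfs-site.xml: tolerate one failed data volume before the DataNode goes offline -->
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>
</property>
```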
10-26-2015
04:22 PM
@Madhan Neethiraj This is an HDP 2.2.8 install, and creating a username that starts with an alphabetic character does work.
10-23-2015
04:43 PM
It was not a length problem. The user is not a member of the group he is being added to; there is an odd mix of LDAP users and local groups in the test environment. Please post the length limit as the answer, since that is the question I asked, and then I can accept it. Thanks.
10-23-2015
04:21 PM
The response from the UI is "enter a valid user name". I will try again.
10-23-2015
04:11 PM
Creating an internal user name of 5 digits is not working correctly.
Labels:
- Apache Ranger
10-22-2015
04:06 PM
No, not my question, but I have done remote repo setups without EPEL, using just HDP and HDP-UTILS, with no problems.