Member since: 09-15-2015
Posts: 75
Kudos Received: 33
Solutions: 4

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1422 | 02-22-2016 09:32 PM
 | 2286 | 12-11-2015 03:27 AM
 | 8423 | 10-26-2015 10:16 PM
 | 7542 | 10-15-2015 06:09 PM
10-15-2015
03:31 PM
I tried logging in to MySQL using the admin/admin account, but it won't let me in. I also tried root@hostname, same issue.
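For what it's worth, a minimal diagnostic sketch, assuming MySQL is running locally and the root password is known; the host name is a placeholder:

```
# Try the failing login explicitly; -p prompts for the password.
mysql -u admin -p -h localhost

# If root can get in, list the user/host pairs MySQL actually knows about.
# A missing 'admin'@'localhost' or 'root'@'hostname' row would explain the rejection.
mysql -u root -p -e "SELECT user, host FROM mysql.user;"
```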
10-14-2015
02:42 AM
I was able to sync the LDAP users to Ambari, but none of the LDAP users can log in to the Ambari UI. The local admin/admin account can no longer log in either. Getting 403 Forbidden errors.
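A quick thing to check, as a sketch assuming the default Ambari install paths: the authentication.ldap.* keys that setup-ldap writes, and the server log during a failed login attempt.

```
# Inspect the LDAP settings Ambari saved during setup-ldap.
grep '^authentication.ldap' /etc/ambari-server/conf/ambari.properties

# Watch for the authentication error while an LDAP user attempts to log in.
tail -f /var/log/ambari-server/ambari-server.log
```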
10-14-2015
01:11 AM
Have a working AD, and ldapsearch works from the Linux node to the AD machine. Trying to set up Ambari to integrate with AD using LDAP with SSL set to 'true', and getting an SSL error. See below.

[root@rgarcia-hdp23201 ~]# ambari-server setup-ldap
Using python /usr/bin/python2.6
Setting up LDAP properties...
Primary URL* {host:port} (host:389): host:636
Secondary URL {host:port} (host:389): host:636
Use SSL* [true/false] (true): true
User object class* (user):
User name attribute* (cn):
Group object class* (group):
Group name attribute* (cn):
Group member attribute* (memberUid):
Distinguished name attribute* (dn):
Base DN* (OU=Rommel_Garcia_Accounts,DC=AD-HDP,DC=COM): OU=Rommel_Garcia_Accounts,DC=AD-HDP,DC=COM
Referral method [follow/ignore] (follow):
Bind anonymously* [true/false] (false): false
Manager DN* (CN=adadmin,OU=MyUsers,DC=AD-HDP,DC=COM): CN=adadmin,OU=MyUsers,DC=AD-HDP,DC=COM
Enter Manager Password* :
Re-enter password:
Do you want to provide custom TrustStore for Ambari [y/n] (y)?y
TrustStore type [jks/jceks/pkcs12] (jks):jks
Path to TrustStore file (/etc/ambari-server/keys/ldaps-keystore.jks):/etc/ambari-server/keys/ldaps-keystore.jks
Password for TrustStore:
Re-enter password:
====================
Review Settings
====================
authentication.ldap.managerDn: CN=adadmin,OU=MyUsers,DC=AD-HDP,DC=COM
authentication.ldap.managerPassword: *****
ssl.trustStore.type: jks
ssl.trustStore.path: /etc/ambari-server/keys/ldaps-keystore.jks
ssl.trustStore.password: *****
Save settings [y/n] (y)? y
Saving...done
Ambari Server 'setup-ldap' completed successfully.
You have new mail in /var/spool/mail/root
[root@rgarcia-hdp23201 ~]# ambari-server restart
Using python /usr/bin/python2.6
Restarting ambari-server
Using python /usr/bin/python2.6
Stopping ambari-server
Ambari Server stopped
Using python /usr/bin/python2.6
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start....................
Ambari Server 'start' completed successfully.
[root@rgarcia-hdp23201 ~]# ambari-server sync-ldap --all
Using python /usr/bin/python2.6
Syncing with LDAP...
Enter Ambari Admin login: admin
Enter Ambari Admin password:
Syncing all...ERROR: Exiting with exit code 1.
REASON: Caught exception running LDAP sync. host:636; nested exception is javax.naming.CommunicationException:
host:636 [Root exception is java.net.SocketException: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)]
[root@rgarcia-hdp23201 ~]# ambari-server sync-ldap --all
Using python /usr/bin/python2.6
Syncing with LDAP...
Enter Ambari Admin login: adadmin
Enter Ambari Admin password:
Syncing all.ERROR: Exiting with exit code 1.
REASON: Sync event creation failed. Error details: HTTP Error 403:
host:636; nested exception is javax.naming.CommunicationException:
host:636 [Root exception is java.net.SocketException: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)]
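One place to look: the NoSuchAlgorithmException around SSLContextImpl$DefaultSSLContext usually means the JVM could not construct its default SSL context at all, which commonly traces back to a truststore it cannot open (wrong path, wrong password, or unreadable file) rather than to LDAP itself. A verification sketch, where ad-ca.crt and the alias are placeholders for the AD CA certificate:

```
# Confirm the truststore opens with the configured password and contains
# the CA certificate that signed the AD server's LDAPS certificate.
keytool -list -keystore /etc/ambari-server/keys/ldaps-keystore.jks

# If the CA certificate is missing, fetch the chain presented on port 636
# and import the CA cert (ad-ca.crt and the alias are placeholders).
openssl s_client -connect host:636 -showcerts </dev/null
keytool -importcert -alias ad-ca -file ad-ca.crt \
  -keystore /etc/ambari-server/keys/ldaps-keystore.jks
```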
Labels:
- Apache Ambari
10-13-2015
02:52 PM
From the Ambari 1.7 doc http://docs.hortonworks.com/HDPDocuments/Ambari-1.7.0.0/Ambari_Doc_Suite/ADS_v170.html#ref-f6bcf79a-a84e-4881-bb9d-6c94b2cca79d, why does the Oozie user need to have all privileges (GRANT ALL)? Database admins might not want to set it up this way.
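For comparison, a sketch of a narrower grant scoped to the Oozie schema only, assuming a MySQL backend with a database and user both named oozie; the exact privilege list is an assumption, and Oozie's schema tool may need more:

```
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS oozie;
-- Scope privileges to the oozie schema instead of GRANT ALL ON *.*
-- ('oozie_password' is a placeholder).
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, DROP
  ON oozie.* TO 'oozie'@'%' IDENTIFIED BY 'oozie_password';
FLUSH PRIVILEGES;
SQL
```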
Labels:
- Apache Oozie
10-12-2015
09:31 PM
Jonas, please remove the internal doc link since AnswerHub will eventually be exposed to the public. Thanks for collaborating on our Security Track!
... View more
10-11-2015
03:07 AM
1 Kudo
I have a 10-node cluster spun up by Cloudbreak in AWS, and everything is green in Ambari except that none of the RegionServers started. I can manually start them all successfully, but this issue won't help automated setups where all services are expected to be running before executing jobs. Here's the Event Log from Cloudbreak:

10/10/2015 10:13:23 PM gpawshdp10node - create in progress: Creating infrastructure
10/10/2015 10:15:48 PM gpawshdp10node - available: Infrastructure creation took 145 seconds
10/10/2015 10:18:48 PM gpawshdp10node - update in progress: Bootstrapping infrastructure cluster
10/10/2015 10:19:39 PM gpawshdp10node - update in progress: Setting up infrastructure metadata
10/10/2015 10:19:39 PM gpawshdp10node - update in progress: Starting Ambari cluster containers
10/10/2015 10:21:14 PM gpawshdp10node - update in progress: Starting Ambari cluster
10/10/2015 10:23:30 PM gpawshdp10node - update in progress: Building Ambari cluster; Ambari server ip: 54.153.5.216
10/10/2015 10:43:09 PM gpawshdp10node - available: Ambari cluster built; Ambari server ip: 54.153.5.216
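As a stopgap for automated setups, a sketch that starts every HBASE_REGIONSERVER host component through Ambari's REST API; AMBARI_HOST, CLUSTER_NAME, and the admin:admin credentials are placeholders:

```
# Ask Ambari to move all stopped RegionServer host components to STARTED.
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start all RegionServers"},"Body":{"HostRoles":{"state":"STARTED"}}}' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/host_components?HostRoles/component_name=HBASE_REGIONSERVER&HostRoles/state=INSTALLED"
```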
10-05-2015
09:53 PM
What HDP version are you using? Did you use the HDP 2.1 documentation to install Kerberos on HDP 2.3? We don't have manual installation docs for Kerberos on HDP 2.3 yet, but Ambari can be used to Kerberize an HDP 2.3 cluster.
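If a KDC is not in place yet, a sketch of the minimal MIT KDC preparation the Ambari Kerberos wizard expects; EXAMPLE.COM and the admin principal name are placeholders, and AD-based setups differ:

```
# On the KDC host: create the admin principal Ambari will use to create
# service principals (realm and principal name are placeholders).
kadmin.local -q "addprinc admin/admin@EXAMPLE.COM"

# On every cluster host: install the Kerberos client utilities.
yum install -y krb5-workstation
```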
10-05-2015
03:08 PM
1 Kudo
We were recently informed that NiFi bottlenecks tend to be, in order of occurrence: CPU, memory, and disk. Is there a recommended Java version (Java 7 vs. 8) and garbage collector (Concurrent Mark Sweep vs. G1)?
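For context on where the collector gets chosen: NiFi passes JVM arguments through conf/bootstrap.conf as numbered java.arg entries. A sketch, where the heap sizes and the argument index are illustrative and the index must not collide with existing entries:

```
# conf/bootstrap.conf -- JVM arguments handed to the NiFi process.
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
# Opt into G1 instead of the default collector (index 13 is illustrative).
java.arg.13=-XX:+UseG1GC
```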
Labels:
- Apache NiFi
- Cloudera DataFlow (CDF)
10-02-2015
08:01 PM
7 Kudos
Here are some key things that will help make an HDInsight cluster more manageable and perform better. The following best practices should be noted.

- Do not use only one storage account for a given HDInsight cluster. For a 48-node cluster, Microsoft recommends 4-8 storage accounts, not because of the storage space but because each storage account provides additional networking bandwidth, opening the pipe as wide as possible so the compute nodes can finish their jobs faster.
- Make the naming convention of the storage accounts as random as possible, with no common prefix. This reduces the chances of hitting storage bottlenecks or common-mode failures across all storage accounts at the same time. This type of storage partitioning in WASB is meant to avoid storage throttling.
- Use D13 for head nodes and D12 for worker nodes.
- When containers are created, make sure to have only one container per storage account. This yields better performance.
- The Hive metastore that comes by default when HDInsight is deployed is transient: when the cluster is deleted, the Hive metastore gets deleted as well. Use Azure DB (essentially SQL Server under the hood) to store the Hive metastore so that it persists even when the cluster is blown away. If the cluster is brand new every time and won't reuse the same tables, Azure DB is not needed.
- When scaling down the cluster, some services stop and have to be started manually. Scaling should be done when no jobs are running, as much as possible.
- The HDFS namespace recognizes both local storage and WASB storage. It is recommended not to change the DataNode directory in the HDFS configuration (which points to the local SSD storage).
- NameNodes are not exposed from HDInsight, so distcp can't be used to transfer data from a remote cluster into HDInsight over HDFS. Use the WASB driver as much as possible to transfer data from an on-premises cluster to an HDInsight cluster, since it yields better performance (see the sketch after this list).
- Only the Hadoop services can be stopped; the VMs are not exposed and cannot be paused. If the goal is to reduce the cost of a running environment, it's better to delete the cluster and recreate it when needed.
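The WASB transfer mentioned above, as a sketch run from the on-premises cluster; it assumes the hadoop-azure (WASB) driver is on the classpath, and MYACCOUNT, MYCONTAINER, and STORAGE_KEY are placeholders:

```
# Copy from on-premises HDFS straight into the HDInsight storage account
# via the WASB driver, bypassing the unexposed HDInsight NameNode.
hadoop distcp \
  -D fs.azure.account.key.MYACCOUNT.blob.core.windows.net=STORAGE_KEY \
  hdfs:///data/source \
  wasb://MYCONTAINER@MYACCOUNT.blob.core.windows.net/data/source
```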