Member since: 08-04-2016
Posts: 31
Kudos Received: 3
Solutions: 0
10-08-2018
10:02 PM
We were able to solve this by locating the cipher suite that was listed as disabled in ambari.properties and re-enabling it by removing it from that list and restarting the server. First, we backed up ambari.properties and removed the ciphers.disabled property entirely, then restarted ambari-server. We then used an openssl command to connect to the ambari-server HTTPS port and identified which cipher suite was being used to establish the connection. Finally, we looked up the corresponding RFC name for that cipher suite and removed it from the list of cipher suites in the ciphers.disabled property in the ambari.properties file.
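For reference, a rough sketch of the check we ran (host and port are placeholders for your environment):
openssl s_client -connect <ambari-server-host>:<https-port> </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
The Cipher line shows the OpenSSL name of the negotiated suite (for example ECDHE-RSA-AES256-GCM-SHA384, whose RFC/JSSE name is TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384); that RFC name is the entry to take out of ciphers.disabled.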
... View more
09-25-2018
05:57 PM
@Jay Kumar SenSharma I am having a similar issue on CentOS 7.x. Please see https://community.hortonworks.com/comments/222163/view.html (scroll all the way down to the last comment for the exact details of the issue). Ambari-server is up and running, but the Ambari Server UI is not accessible. My curl request to https://<ambari-server host>:8xxx fails with the message below:
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
* Closing connection 0
curl: (35) Encountered end of file
Though all the ambari-agents in the cluster have been updated to force the use of TLSv1_2, no luck. Any thoughts? Thanks in advance!
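For what it's worth, one way I plan to isolate the protocol side is to force TLS 1.2 on the curl request itself (host and port are placeholders):
curl -vk --tlsv1.2 https://<ambari-server host>:<port>/
If the handshake succeeds only when --tlsv1.2 is forced, the protocol negotiation rather than the certificate would seem to be the culprit.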
... View more
09-19-2018
08:02 PM
Hello, I am running into the same issue. We use CentOS 7.5 on our HDP cluster nodes, RHEL updates ran on the ambari-server a few days ago, and now I cannot access our ambari-server's URL in the browser, as the connection gets dropped. I see the following in the Ambari-server logs:
javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
And when I run curl against the Ambari-Server's HTTPS URL:
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
* Closing connection 0
My current environment: Ambari Server 2.4.0 for HDP 2.5.0. Python version 2.7.5-69.el7_5, and OpenSSL was updated in this server update to 1.0.2k-12.el7. Should I downgrade to a different build of Python 2.7.5? I have always had Python cert verification disabled; that did not help.
Other options tried:
a) Updated /etc/ambari-server/conf/ambari.properties with the following line:
security.server.disabled.protocols=SSL|SSLv2|SSLv2Hello|SSLv3|TLSv1
b) Updated /etc/ambari-agent/conf/ambari-agent.ini with the following line:
force_https_protocol=PROTOCOL_TLSv1_2
Restarted the Ambari-server and the agents after the above configuration changes; still not working. Kindly advise.
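For reference, one check I intend to run next to see which protocols the server still accepts (host and port are placeholders for my environment):
openssl s_client -connect <ambari-server-host>:<https-port> -tls1_2
If even a forced TLS 1.2 handshake fails, the server-side protocol/cipher configuration (rather than the agents) would be the likely cause.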
... View more
09-14-2018
01:51 AM
So I made a few config changes and restarted MySQL a couple of times, but I am not sure which one fixed it.
Started with an Ambari 2.7.0.0 installation on a fresh CentOS 7.5 VM. Installed the MariaDB 10.2 packages as described in the install document, and in addition installed MariaDB-devel as well. Increased the following per the Ambari Server Tuning documentation link above: 'agent.threadpool.size.max' in /etc/ambari-server/conf/ambari.properties to 50, and 'client.threadpool.size.max' in /etc/ambari-server/conf/ambari.properties to 100. Installed mysql-connector-java version 8.0.12 using the commands given above (the same ones available in the Ambari documentation), ran 'ambari-server setup' with the required options, and started Ambari Server. At this point, I was still getting the C3P0 deadlock errors.
Then I restarted MySQL and ambari-server a few times (without making any config updates for MySQL, and without changing any of the Ambari connection-pool tuning parameters). At one point I noticed an error about the timezone 'EDT' not being recognized, so to fix that I executed the following in MariaDB:
set GLOBAL time_zone ='-5:00'
I then restarted the Ambari server again, and it started up. So I am not sure which of the above fixed this. Once I get HDP 3.0 installed successfully, I will probably retry the whole installation to narrow down the fix. Thank you very much for all of your input.
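In case it helps someone else, a persistent equivalent of that SET GLOBAL (a sketch; the config file path and section depend on your MariaDB packaging, e.g. /etc/my.cnf.d/server.cnf):
[mysqld]
default_time_zone = '-5:00'
That survives a MariaDB restart, whereas SET GLOBAL does not.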
... View more
09-13-2018
10:36 PM
Okay, I will try this option as well once I finish up with the rest of the installation. Currently we have HDP 2.5 running, which is configured with MariaDB, hence I tried that first. Thank you.
... View more
09-12-2018
10:04 PM
Not yet, same C3P0 errors. I will redo a fresh installation again and will keep you posted.
... View more
09-07-2018
07:53 PM
Hi @Akhil S Naik,
I have restarted the Ambari server plenty of times. There is no 'Caused by' section in the stack trace. In addition to the error logs above, I see three pool thread stack traces (one per helper thread), followed by timeout messages. Please find one of them below: Thread[C3P0PooledConnectionPoolManager[identityToken->1bqv1ac9x130woy214hz5t|66982506]-HelperThread-#0,5,main]
java.net.PlainSocketImpl.socketConnect(Native Method)
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
java.net.Socket.connect(Socket.java:589)
java.net.Socket.connect(Socket.java:538)
java.net.Socket.<init>(Socket.java:434)
java.net.Socket.<init>(Socket.java:244)
com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:259)
com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:307)
com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2484)
com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2521)
com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2306)
com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:839)
com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:421)
com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:350)
com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)
WARN [C3P0PooledConnectionPoolManager[identityToken->1bqv1ac9x130woy214hz5t|66982506]-AdminTaskTimer] ThreadPoolAsynchronousRunner:220 - Task com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@5d1f3439 (in deadlocked PoolThread) failed to complete in maximum time 60000ms. Trying interrupt().
WARN [C3P0PooledConnectionPoolManager[identityToken->1bqv1ac9x130woy214hz5t|66982506]-AdminTaskTimer] ThreadPoolAsynchronousRunner:220 - Task com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@1204c5e (in deadlocked PoolThread) failed to complete in maximum time 60000ms. Trying interrupt().
[C3P0PooledConnectionPoolManager[identityToken->1bqv1ac9x130woy214hz5t|66982506]-AdminTaskTimer] ThreadPoolAsynchronousRunner:220 - Task com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@38cf7e05 (in deadlocked PoolThread) failed to complete in maximum time 60000ms. Trying interrupt().
2018-08-30 17:08:56,973 WARN [C3P0PooledConnectionPoolManager[identityToken->1bqv1ac9x130woy214hz5t|66982506]-AdminTaskTimer] ThreadPoolAsynchronousRunner:220 - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@f3bced7 -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks
... View more
09-07-2018
06:26 PM
Hi @sjanardhan, I will review the ambari-server tuning article for the ThreadPool-related settings. Thanks for the link. This is a fresh installation of Ambari 2.7.0.0, but I noticed I am using a CentOS 7.5 image; per the support matrix, only CentOS 7.4 and earlier are supported. I am not sure whether the C3P0 deadlock issue above might be caused by the CentOS version.
... View more
09-05-2018
08:53 PM
Trying to install Ambari 2.7.0.0 on a CentOS 7 VM manually. Per the software compatibility chart, I am using MariaDB 10.2 and the latest version of mysql-connector-java.jar (8.0.12-1.el7). The Ambari server appears to be starting, but I get the following error message:
WARN [C3P0PooledConnectionPoolManager[identityToken->1bqv1ac9x1bj2mu81ac8bar|66982506]-AdminTaskTimer] ThreadPoolAsynchronousRunner:220 - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@14dccacc -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 3
Active Threads: 3
Active Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@5b263c1c on thread: C3P0PooledConnectionPoolManager[identityToken->1bqv1ac9x1bj2mu81ac8bar|66982506]-HelperThread-#0
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@295e071e on thread: C3P0PooledConnectionPoolManager[identityToken->1bqv1ac9x1bj2mu81ac8bar|66982506]-HelperThread-#1
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@6ab7a61c on thread: C3P0PooledConnectionPoolManager[identityToken->1bqv1ac9x1bj2mu81ac8bar|66982506]-HelperThread-#2
Pending Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@5125c3b7
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@3b09c353
Pool thread stack traces:
Thread[C3P0PooledConnectionPoolManager[identityToken->1bqv1ac9x1bj2mu81ac8bar|66982506]-HelperThread-#0,5,main]
java.net.PlainSocketImpl.socketConnect(Native Method)
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
java.net.Socket.connect(Socket.java:589)
--------------------------
Any thoughts on what is missing here?
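One sanity check I can still run (a sketch; the database host, user, and name are whatever was supplied during ambari-server setup, shown here as placeholders):
mysql -h <db-host> -u <ambari-db-user> -p -e 'SELECT 1;' <ambari-db-name>
Since the pool threads appear stuck in socketConnect, confirming basic connectivity from the Ambari host to the database would at least rule that part out.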
... View more
Labels:
Apache Ambari
03-14-2018
04:15 AM
I have Zeppelin 0.7 configured as a separate service outside of Ambari, but Livy sessions are not getting created and the interpreter crashes. Could someone share a working Livy interpreter configuration for Zeppelin 0.7? Thanks in advance!
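For anyone replying, an illustrative baseline of the interpreter properties I believe are involved (host, port, and master value are placeholders, not a confirmed working config):
zeppelin.livy.url = http://<livy-server-host>:8998
livy.spark.master = yarn-cluster
Any corrections or additional required properties would be appreciated.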
... View more
09-22-2017
08:36 PM
@Brazelton Mann I get the same error as well; did you find a way to resolve this? Thanks.
... View more
09-13-2017
09:37 PM
We have Zeppelin version 0.6.2 installed and running as part of the HDP 2.5 stack. We noticed that we could not delete any paragraphs from the Zeppelin notebooks we created. I do see a fix available for this bug: https://issues.apache.org/jira/browse/ZEPPELIN-1033 Is there a way I can apply a hotfix for Zeppelin using Ambari or from the backend? I understand the easier solution is to upgrade Zeppelin to 0.7, but we are not ready to upgrade the HDP stack yet. Any suggestions?
... View more
Labels:
Apache Zeppelin
09-07-2017
07:18 PM
Thanks for the instructions. I have an HDP 2.5 cluster and want to move or create all the collection configuration in an HDFS directory instead of on local disk. The config you have above updates solrconfig.xml for each collection, and this works, but is there a way to apply the whole change from the Ambari console by updating the infra-solr-env-template? Thanks in advance for your input.
... View more
09-01-2017
09:08 PM
Follow this article https://community.hortonworks.com/articles/88259/set-time-to-live-ttl-on-solr-records.html to set values for the _ttl_ and _expire_at_ fields in solrconfig.xml and the related files, upload them to ZooKeeper, and restart the ambari-infra-solr services. That will help reduce disk-space usage according to the TTL value you set.
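For illustration, the relevant solrconfig.xml fragment looks roughly like this (a sketch based on the linked article; the chain name, TTL value, and delete period are placeholders to adapt to your collection):
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
  <processor class="solr.DefaultValueUpdateProcessorFactory">
    <str name="fieldName">_ttl_</str>
    <str name="value">+30DAYS</str>
  </processor>
  <processor class="solr.processor.DocExpirationUpdateProcessorFactory">
    <int name="autoDeletePeriodSeconds">86400</int>
    <str name="ttlFieldName">_ttl_</str>
    <str name="expirationFieldName">_expire_at_</str>
  </processor>
  <!-- keep the remaining processors from the existing chain here -->
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>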
... View more
08-03-2017
08:03 PM
Thanks for your article on setting TTL for Solr documents.
However, in my environment, I have Ambari Infra-solr auto created cores for hadoop logs that are taking
up disk space. I followed the above and updated the managed-schema and solrconfig.xml under
/usr/lib/ambari-infra-solr/server/solr/configsets/data_driven_schema_configs/ I used Ambari Dashboard to restart Ambari Infra Solr and Zookeeper services instead of manually starting Solr using your above command. How would we
know if Zookeeper and Solr picked up these settings. Thanks Anitha
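One check I was considering, to see what Solr is actually using (the script path and the ZooKeeper znode below are guesses based on the default Ambari Infra layout; please correct me if they differ):
/usr/lib/ambari-infra-solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zk-host>:2181 -cmd getfile /infra-solr/configs/hadoop_logs/solrconfig.xml /tmp/solrconfig.xml
If the downloaded copy contains the TTL processor changes, ZooKeeper has the updated config and the cores should pick it up on reload/restart. Would that be the right way to confirm?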
... View more
07-24-2017
09:30 PM
I am running into errors from Ambari Infra Solr on HDP 2.5 with a Kerberized and SSL-enabled cluster. I noticed that your steps use a separate keytab for solr-spnego. Is it mandatory to do it this way? SOLR_KERB_KEYTAB=/etc/security/keytabs/solr-spnego.service.keytab The errors I have are:
SASL configuration failed: javax.security.auth.login.LoginException: Pre-authentication information was invalid (24) Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it
and '401 Authentication required'. Please let me know what I am missing here.
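If it helps in diagnosing, a hedged sanity check on the keytab itself would be (principal, hostname, and realm are placeholders):
klist -kt /etc/security/keytabs/solr-spnego.service.keytab
kinit -kt /etc/security/keytabs/solr-spnego.service.keytab HTTP/<solr-host>@<REALM>
A pre-authentication failure usually points at the keytab/principal combination, so this is the first thing I plan to verify.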
... View more
07-21-2017
08:01 PM
No, I have been having a lot of issues with Ambari Infra Solr, Kerberos, and the HTTPS Logsearch UI. Still researching...
... View more
07-21-2017
07:59 PM
Thanks for the instructions. I have a Kerberized and HTTPS-enabled HDP 2.5 cluster, and after Kerberizing, I see errors in the Ambari Infra logs on both nodes saying it could not replicate the index between the Solr nodes. Is this related to the above steps?
**Note: I am able to access the Solr UI of both Ambari Infra Solr nodes, though.
Errors:
ERROR [c:hadoop_logs s:shard3 r:core_node1 x:hadoop_logs_shard3_replica1] org.apache.solr.update.StreamingSolrClients$1 (StreamingSolrClients.java:79) - error
org.apache.solr.common.SolrException: Authentication required
And when I restart the Ambari Infra Solr services, I see the error below as well:
Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /solr/vertex_index_shard1_replica1/get. Reason:
<pre> Authentication required</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
Any input on how to resolve this? Recently I updated the LogSearch UI to HTTPS, and it is unable to connect to the Solr instances. Thanks in advance.
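For reference, a request that should exercise SPNEGO directly against each Infra Solr instance (port 8886 is an assumption based on the default Ambari Infra setup; adjust as needed):
curl -i -k --negotiate -u : "https://<infra-solr-host>:8886/solr/admin/collections?action=LIST"
A 401 here despite a valid Kerberos ticket would point at the SPNEGO/authentication configuration rather than at replication itself.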
... View more
06-29-2017
04:52 PM
Has anyone else faced the same issue with WebHCat trying to write to Hive tables on a Kerberized cluster?
... View more
06-23-2017
04:39 PM
I have a Kerberized and HTTPS-enabled HDP 2.5 cluster, and we are trying to run a Hadoop Hive task from SSIS. I understand SSIS uses WebHCat to run Hive queries. A sample CSV file is uploaded into HDFS, a Hive table is created separately, and I am trying to insert data into this Hive table using the simple inline script option below:
load data INPATH <HDFS Filename> OVERWRITE into table <table_name>
When I execute the SSIS package, I get the following error in webhcat.log and hivemetastore.log:
Caused by: MetaException(message:User: HTTP/_HOST@REALM is not allowed to impersonate <username>)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_delegation_token_result$get_delegation_token_resultStandardScheme.read(ThriftHiveMetastore.java)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_delegation_token_result$get_delegation_token_resultStandardScheme.read(ThriftHiveMetastore.java)
Based on various online references, I updated the following proxy settings in core-site.xml:
hadoop.proxyuser.HTTP.groups=* (Note: originally this field had 'users'; changed to *)
hadoop.proxyuser.HTTP.hosts=*
In webhcat-site.xml, I have the following as well:
webhcat.proxyuser.hcat.groups=*
webhcat.proxyuser.hcat.hosts=*
webhcat.proxyuser.HTTP.hosts=*
webhcat.proxyuser.HTTP.groups=*
I am getting the same error from webhcat.log with this curl command:
curl -i --negotiate -u : 'http://<web-hcat-server-name>:50111/templeton/v1/ddl/database/default'
In my SSIS Hadoop connection manager, I have the WebHCat connection enabled, selected Kerberos as the authentication method, and supplied my AD username and password. The Test Connection from the Hadoop Connection Manager to WebHCat succeeds, but the Hadoop Hive Task fails when it runs.
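For completeness, the XML form of those core-site.xml entries (a sketch; the values simply mirror the key=value lines above):
<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>
Changes to proxyuser settings generally require restarting the affected services (HDFS and the Hive metastore) before they take effect.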
... View more
Labels:
- Labels:
-
Apache Hive
06-23-2017
03:41 PM
Thank you for clarifying my questions in detail. I switched back to using ambari-server-generated certificates for the two_way_ssl setup for now, as the CA-signed certificate we have is a SAN certificate, i.e., one certificate with multiple SANs covering all nodes. When I tried using it, the Ambari server could not pick up the CA-signed agent cert, because the node name is not present in the 'Subject' field of the certificate but instead in the extensions, under the SAN names attribute. Due to this, I reverted to using the ambari-server-signed certificates on all the nodes. Next time, we will get an individual certificate for each node with the name in the 'Subject' field to avoid this type of issue.
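For anyone hitting the same thing, one quick way to compare the two certificates (the file path is a placeholder for wherever your agent cert lives):
openssl x509 -in /var/lib/ambari-agent/keys/<host>.crt -noout -subject
openssl x509 -in /var/lib/ambari-agent/keys/<host>.crt -noout -text | grep -A1 'Subject Alternative Name'
The first command shows the Subject that Ambari appears to match against; the second shows where the SAN names live.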
... View more
06-23-2017
03:25 PM
Thank you for the reply. I switched to using Ambari-generated certs for all nodes instead. The instructions were helpful. Thank you.
... View more
06-01-2017
02:31 PM
***Update - I switched back to using Ambari's generated certificates for the agents and the server, as I was getting SSL errors related to the certs not being signed by the same CA. Is this because I was using a self-signed certificate for local testing? I haven't tried this with the CA-signed multi-SAN certificate. Also, while comparing the 'Subject Name' on the certificate generated by the Ambari server with the multiple-'Subject Alternative Name' certificate I intended to use originally, the Subject names would have caused a discrepancy. It looks like the Ambari server looks for the node name in the Subject line, but in the SAN certificate I have, the node names are part of the 'V3 extensions' of the certificate. If you have any suggestions for this scenario, please post. Thanks.
... View more
05-30-2017
05:49 PM
I tried your steps above, but the ambari-server still generates certificates on the agent nodes. To give some context, I have a single certificate with multiple 'subject alternative names' covering all the nodes in the cluster. I put that one certificate under the /var/lib/ambari-agent/keys folder on all the agents, but as soon as I restart the Ambari-server, it still does not pick up my '.crt'; instead it begins generating the .key, .csr, and .crt. My goal is to have the agents and server on all nodes use the .crt I already have for the two_way_ssl functionality. Please advise.
... View more
05-29-2017
06:17 PM
1 Kudo
We have a Kerberized HDP 2.5.0 cluster, and I recently converted the services to use HTTPS. I have Ambari Infra Solr server instances running on 2 of the x nodes in the cluster.
1. First, I can access the Admin UI for only one of the Ambari Infra Solr instances. Though I have a Kerberos ticket for HTTP SPNEGO authentication, I get the error below when I navigate to the other Ambari Infra Solr Admin UI via the Quicklink in the Ambari console:
org.apache.hadoop.security.authentication.util.SignerException: Invalid signature
2. Second, I see the errors below in the Ambari Infra Solr logs on both instances:
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /solr/audit_logs_shard2_replica1/update. Reason:
and
audit_logs_shard0_replica1 s:shard0 c:audit_logs r:core_node4] ERROR [c:audit_logs s:shard0 r:core_node4 x:audit_logs_shard0_replica1] org.apache.solr.common.SolrException (SolrException.java:148) - org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
I understand this is related to Kerberos. Any thoughts on what I might be missing?
... View more
Labels:
Apache Ambari
Apache Solr
05-29-2017
05:49 PM
I am trying to configure and enable two-way SSL communication between the ambari-server and the agents. I understand that just enabling the setting 'security.server.two_way_ssl=true' in /etc/ambari-server/conf/ambari.properties will auto-generate the certs on the nodes, with the Ambari server acting as a CA to sign them. We have CA-signed certs for these servers, and I was trying to follow the steps listed in this article:
https://community.hortonworks.com/articles/66860/enable-two-way-ssl-between-ambari-server-and-ambar.html
The CA-signed certificate I have is a single certificate with multiple SAN names covering all the nodes in the cluster. According to the above article, if we want to use CA certs for both the server and the agents, I have to copy the .crt and .key files to the /var/lib/ambari-server/keys and /var/lib/ambari-agent/keys folders on all the server and agent nodes. Though I have secured the Hadoop services in the HDP cluster using the CA-signed certs, we keep the truststore and keystores in the /etc/security/serverKeys folder, and the private key file itself is not present on all the nodes. But two-way SSL requires the private key file to be present on the agent nodes.
My question is: is having the private key file on all the agent nodes riskier? Or should I use the CA-signed cert/key only on the ambari-server side and have the agents get an auto-generated key pair and cert signed by the Ambari server? If I use self-signed certs for this two_way_ssl setup, then we have two sets of certificates used and maintained on the cluster, i.e., one set used by the services and one self-signed set auto-generated by the Ambari server.
P.S.: The cluster does not have Ranger yet; it is Kerberized with AD, and HTTPS is enabled for the Hadoop services and the Ambari server.
Please advise.
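For reference, the pieces involved as I understand them (paths are the defaults; the agent-side keysdir entry is my assumption of where the agent expects the files, so please correct me if that is wrong):
# /etc/ambari-server/conf/ambari.properties
security.server.two_way_ssl=true
# /etc/ambari-agent/conf/ambari-agent.ini
[security]
keysdir=/var/lib/ambari-agent/keys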
... View more
08-22-2016
02:53 PM
1 Kudo
Is there published information from Hortonworks about future support for HDP installations on Windows?
... View more
Labels:
Hortonworks Data Platform (HDP)