Member since: 12-21-2015
Posts: 57
Kudos Received: 7
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 3763 | 08-25-2016 09:31 AM |
07-25-2018
01:30 PM
Hi, I am facing the same issue. Can you please help me with this? Error: Could not establish connection to jdbc:hive2://sandbox-hdp.hortonworks.com:8443/;ssl=true;sslTrustStore=/var/lib/knox/data-2.6.5.0-292/security/keystores/gateway.jks;trustStorePassword=knox;transportMode=http;httpPath=gateway/default/hive: HTTP Response code: 500 (state=08S01,code=0) Regards, Ashokkumar.R
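For reference, here is how a connection like the one in the error can be attempted from Beeline. The JDBC URL below is copied from the error message; the `-n`/`-p` credentials and the log path are assumptions that may differ on your cluster:

```shell
# Connect to HiveServer2 through the Knox gateway (URL taken verbatim from
# the error above; username/password are placeholders for your environment).
beeline -u "jdbc:hive2://sandbox-hdp.hortonworks.com:8443/;ssl=true;sslTrustStore=/var/lib/knox/data-2.6.5.0-292/security/keystores/gateway.jks;trustStorePassword=knox;transportMode=http;httpPath=gateway/default/hive" \
  -n admin -p admin

# An HTTP 500 is returned by the gateway itself, so the Knox gateway log
# (path may vary by install) is usually the first place to look:
tail -n 100 /var/log/knox/gateway.log
```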
12-06-2016
05:05 PM
1 Kudo
You can install and configure HAWQ using Ambari. Please have a look at the document below for the steps: http://hdb.docs.pivotal.io/210/hdb/install/install-ambari.html Ranger integration is on the roadmap and in progress: https://issues.apache.org/jira/browse/HAWQ-256
12-01-2016
12:57 PM
1 Kudo
Here's a snapshot.
11-22-2016
10:32 AM
1 Kudo
Hi @J. D. Bacolod - please see this article I wrote a while ago, which explains how Ranger works: https://community.hortonworks.com/content/kbentry/49177/how-do-ranger-policies-work-in-relation-to-hdfs-po.html From HDP 2.5, you can also deny access explicitly via a Deny policy. See this article on how to enable deny conditions: https://community.hortonworks.com/content/kbentry/61208/how-to-enable-deny-conditions-and-excludes-in-rang.html Hope this helps!
12-12-2016
06:30 PM
@J. D. Bacolod This error might be caused by a missing data folder. Can you try creating the folder manually and restarting the server? If that doesn't help, can you capture the TRACE or DEBUG logs?
03-02-2017
10:42 AM
Hey @J. D. Bacolod, did you solve your issue? I have the same problem with Solr when I try to start it. When I hit "restart all", the Solr symbol turns green and I can enter the UI, but there I get the next error: collection1_shard1_replica1:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Index dir 'hdfs://sandbox.hortonworks.com:8020/solr/collection1/core_node2/data/index/' of core 'collection1_shard1_replica1' is already locked. The most likely cause is another Solr server (or another Solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs
tweets_shard1_replica1:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Index dir 'hdfs://sandbox.hortonworks.com:8020/solr/tweets/core_node1/data/index/' of core 'tweets_shard1_replica1' is already locked. The most likely cause is another Solr server (or another Solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs
collection1_shard2_replica1:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Index dir 'hdfs://sandbox.hortonworks.com:8020/solr/collection1/core_node1/data/index/' of core 'collection1_shard2_replica1' is already locked. The most likely cause is another Solr server (or another Solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs
Anyone have an idea? Best regards, Martin
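A common remedy for this "Index dir ... is already locked" state is to clear stale lock files left behind by an unclean shutdown. This is only a sketch, and it assumes no other live Solr process is actually using these indexes; the paths are taken from the error messages above:

```shell
# Stop Solr first so no running process holds the indexes.
# With lockType=hdfs, Solr's HdfsLockFactory keeps a write.lock file inside
# each index directory; remove the stale ones (paths from the errors above):
hdfs dfs -rm /solr/collection1/core_node1/data/index/write.lock
hdfs dfs -rm /solr/collection1/core_node2/data/index/write.lock
hdfs dfs -rm /solr/tweets/core_node1/data/index/write.lock
# Then restart Solr and check that the cores come up without the lock error.
```

Only do this when you are sure the locks are stale; deleting a lock held by a live Solr instance can corrupt the index.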
09-19-2016
02:18 AM
Assuming I installed four instances of PostgreSQL 9.3: if, for example, the Ranger database fails, that also means failure for HDFS and Hive security (among others). So these components (Ambari, Hive, Oozie, Ranger) are not independent enough to guarantee that a failure in one of their databases leaves the others operating smoothly. Someone suggested running a single database instance for all four services in High Availability mode (master-slave, with warm standby), or, on a multi-node cluster, four separate database instances (same distro and version, presumably), each in High Availability. For an inexperienced DB admin like me, though, this is quite a chore. As I have read in the PostgreSQL documentation, there are a number of solutions for High Availability, like Shared Disk Failover, Transaction Log Shipping, etc. What solution did you employ for PostgreSQL HA? Can those who have done this in a production cluster share how you did it? @Sunile Manjee
09-16-2016
05:00 AM
1 Kudo
@J. D. Bacolod I would suggest you go with option 2, because with option 2 you will have: 1. more control over operations; 2. fewer mistakes (negligible); 3. freedom from misconfiguration headaches; 4. quick and simple daily operations and configuration. Please find below the Hortonworks documentation for automated installation of HDP using Ambari; please go through all the steps carefully. https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_Installing_HDP_AMB/content/index.html If you want to go straight to installing on multiple nodes, go to step 3, "Installing, Configuring, and Deploying a HDP Cluster". If this answers your query, please accept the answer.
09-01-2016
10:09 AM
1 Kudo
@J. D. Bacolod Refer to the thread below: https://community.hortonworks.com/questions/21955/create-new-hive-user.html
09-01-2016
02:37 AM
1 Kudo
The answer is: it all depends on how YARN is set up for queues. All tools (Sqoop, Pig, Hive) have a way of specifying a queue via the command line. If you are using Hue, it can even be set up to impersonate your user. So you really do need to understand how YARN is set up for queuing. You don't need to configure a queue if YARN isn't configured for queues; if it is, then you have to read the configuration to know what will happen.
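To illustrate how each tool takes a queue on the command line, here is a sketch; the queue name `myqueue`, the database URL, and the script names are placeholders, and your cluster's queue names come from its YARN scheduler configuration:

```shell
# Hive (MapReduce execution engine): set the queue for the session.
hive -e "SET mapreduce.job.queuename=myqueue; SELECT 1;"

# Hive on Tez uses its own property instead.
hive -e "SET tez.queue.name=myqueue; SELECT 1;"

# Sqoop: pass the Hadoop property with -D (it must come right after the
# subcommand, before the Sqoop-specific options).
sqoop import -Dmapreduce.job.queuename=myqueue \
  --connect jdbc:mysql://dbhost/sales --table orders

# Pig: same Hadoop property, passed through to the launched jobs.
pig -Dmapreduce.job.queuename=myqueue myscript.pig
```

If no queue is specified, jobs go to whatever default queue the scheduler configuration assigns.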