Member since: 01-22-2016
Posts: 47
Kudos Received: 6
Solutions: 1

My Accepted Solutions
Title | Views | Posted
--- | --- | ---
  | 1209 | 01-19-2017 11:09 AM
12-24-2018 12:49 PM
1 Kudo
Error stack contains: HTTP Status 403, Message: Forbidden. There are no further details beyond that.
12-24-2018 11:52 AM
Hello Jagadeesan, the clusters are not multi-homed, and the Timeline Server is providing the required delegation token. The issue is specific to the KMS delegation token. Can you please confirm whether this is a known issue? I have also verified the following: https://community.hortonworks.com/content/supportkb/154246/error-orgapachehadoopyarnexceptionsyarnexception-f.html
12-24-2018 10:08 AM
Version: HDP 2.6.5. Error while running a DistCp job; both clusters are secure and in the same realm. Error: Failed to renew token: Kind: kms-dt, Service: KMS-SERVER:PORT (). I have already tried setting mapreduce.job.hdfs-servers.token-renewal.exclude to the target cluster and also to the specific KMS host name.
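For context, a minimal sketch of the kind of invocation involved; the nameservice names (sourcens, targetns) and paths are placeholders, not the actual cluster values:

```bash
# Sketch: DistCp between two secure clusters, excluding the target
# nameservice from delegation-token renewal. sourcens/targetns and
# the paths below are hypothetical placeholders.
hadoop distcp \
  -Dmapreduce.job.hdfs-servers.token-renewal.exclude=targetns \
  hdfs://sourcens/data/dir \
  hdfs://targetns/data/dir
```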
Labels:
- Apache Hadoop
09-11-2017 01:56 PM
@Nixon Rodrigues The problem was with the initial import: the atlas user didn't have read/execute on /apps/hive/warehouse. Once I added the policy for the atlas user and re-ran the script, I was able to get everything in place.
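A sketch of that verification flow, assuming standard HDP locations; the keytab path, principal form, and import-hive.sh location are assumptions and may differ per install:

```bash
# Verify the atlas user can now traverse the warehouse path.
# The keytab path and principal below are assumed HDP defaults.
kinit -kt /etc/security/keytabs/atlas.service.keytab atlas/$(hostname -f)
hdfs dfs -ls /apps/hive/warehouse

# Re-run the initial Hive metadata import after the Ranger policy change.
/usr/hdp/current/atlas-server/hook-bin/import-hive.sh
```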
09-08-2017 01:27 PM
@Geoffrey Shelton Okot The authorizer class is set to simpleAuthorizer*. Why should I have the Ranger plugin for Kafka enabled?
09-08-2017 11:30 AM
Env: HDP 2.6, Kerberos enabled, Atlas added as a service post-upgrade. Metadata is not visible from the Atlas UI. I tried consuming the messages from the beginning for the topic ATLAS_HOOK, and the topic has no messages. However, I can see the list of entities, and when I query for the entities I can find them in the kafka-logs (grepping for the entity name matches the recent .log files). The Ranger plugin for Kafka is not enabled, and when I searched the ACLs, all users have write access from all hosts for the topic.
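For reference, a sketch of draining the topic from the earliest offset on a Kerberized cluster; the broker host, port, and security setting are placeholders and assume a valid Kerberos ticket and client JAAS config:

```bash
# Consume ATLAS_HOOK from the beginning. Broker address and the
# security.protocol value are placeholders for a typical HDP setup.
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --bootstrap-server broker-host:6667 \
  --topic ATLAS_HOOK \
  --from-beginning \
  --consumer-property security.protocol=SASL_PLAINTEXT
```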
09-06-2017 12:33 PM
@Josh Elser Can you help me out? I am not sure what "insufficient permissions" means here. Are you referring to the SYSTEM namespace? I am trying to launch a session using phoenix-sqlline with an under-privileged user who has read permission on ALL the namespaces and tables (typically, by adding the public group to the default policy in Ranger). You had said: "2. When a user makes the first connection to Phoenix (instantiates the JDBC driver) it will check and try to create the SYSTEM tables if they don't already exist. For all but the first connection, this will be a no-op. If you have permissions put in place, you will want to launch sqlline (or some application using Phoenix) which has the permission to create these SYSTEM tables. Then, before having unprivileged users access Phoenix, make sure they have read permission on the system tables." As per the above, my understanding is that a user needs full permissions on the SYSTEM tables when connecting to sqlline for the first time, and that afterwards just granting read access on the SYSTEM tables should let them re-establish the session. Also, can you please point me to a document with information on restricting access to Phoenix via Ranger?
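A sketch of that sequence, with placeholder quorum, znode, and user names; the HBase-shell grant at the end is the non-Ranger equivalent of a read-only policy on the SYSTEM namespace:

```bash
# First connection as a privileged user, so Phoenix can create the
# SYSTEM tables (quorum, znode, and user names are placeholders).
phoenix-sqlline zk1,zk2,zk3:2181:/hbase-secure

# Afterwards, an unprivileged user only needs read access on SYSTEM:*.
# Without Ranger, the equivalent HBase-shell grant would look like:
echo "grant 'lowpriv_user', 'RX', '@SYSTEM'" | hbase shell
```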
09-01-2017 11:12 AM
@Josh Elser I have followed the above steps but have a problem when I revoke the write and create permissions for the user after the first login. Env: HDP 2.6, phoenix.schema.isNamespaceMappingEnabled enabled, Kerberos and the Ranger HBase plugin enabled. After enabling the property: 1) Added the user to the default policy, which grants all permissions. 2) Logged in using phoenix-sqlline zookeeper-quorum:2181:/hbase-secure and connected without any issue. 3) Logged out and removed the WCA (Write/Create/Admin) permissions from the Ranger policy. Below is the error:
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException): org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'user@REALM', action: put, tableName: SYSTEM:CATALOG, family: 0, column: _0
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.requirePermission(RangerAuthorizationCoprocessor.java:551)
02-16-2017 10:47 AM
I have been running this from the command line, using the active NameNode's host name in the HDFS path, for now. I tried hadoop-httpfs and was able to do a GET (read). However, creating a collection failed with the error "No FileSystem for scheme: http" (the protocol for hadoop-httpfs).
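For reference, a sketch of a read through the HttpFS REST endpoint; the host and path are placeholders, 14000 is the HttpFS default port, and --negotiate assumes a valid Kerberos ticket:

```bash
# Read a file via HttpFS, which exposes a WebHDFS-compatible REST API.
# Host and path below are placeholders for this sketch.
curl --negotiate -u : \
  "http://httpfs-host:14000/webhdfs/v1/user/test/sample.txt?op=OPEN"
```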
02-08-2017 08:13 AM
Hello Team, I have a running HDP 2.5.3 cluster. I downloaded the Solr repo and added it to Ambari as per the available documentation. SolrCloud is configured to use HDFS, and I also see the directory /solr created in HDFS. Below is the startup command that gets executed when I start the service from Ambari, and the service starts successfully. The issue I see is with -Dsolr.hdfs.home=hdfs://nameservice/solr, which should also include the port. How can we modify this? I tried setting the HDFS home property to the complete defaultFS path, but hdfs://nameservice gets appended in every case.
Execute['/opt/lucidworks-hdpsearch/solr/bin/solr start -h solr-host -cloud -z zk1:2181,zk2:2181,zk3:2181/solr -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs -Dsolr.hdfs.home=hdfs://nameservice/solr -p 8983 -m 512m >> /var/log/service_solr/solr-service.log 2>&1'] {'environment': {'JAVA_HOME': '/usr/java/default'}, 'user': 'solr'}
However, I am not able to create a collection or run the service check. The Solr log says unknown host 'nameservice':
"ERROR [c:collection1 s:shard2 r:core_node2 x:collection1_shard2_replica1] org.apache.solr.core.CoreContainer (CoreContainer.java:826) - Error creating core [collection1_shard2_replica1]: java.net.UnknownHostException: nameservice"
Labels:
- Apache Hadoop
- Apache Solr