Member since: 10-01-2015
Posts: 52
Kudos Received: 25
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1060 | 09-29-2016 02:09 PM |
|  | 440 | 09-28-2016 12:32 AM |
|  | 1975 | 08-30-2016 09:56 PM |
07-12-2017
03:20 PM
1 Kudo
Thanks vperiasamy for the suggestion - I was not able to respond in time. I use the same trick of removing/re-adding the Ranger plugins to check whether stale artifacts/cache are causing issues.
The problem here was that when we deleted Kafka via Ambari and reinstalled it, the links to ranger_*-kafka-plugin* were never created, although the service showed as installed. So I removed ranger_*-kafka-plugin* via yum and reinstalled it, and the links got created. It's working fine now. Thanks
Mayank
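For reference, a minimal sketch of the reinstall steps described above, assuming a typical HDP layout (the exact package name and library path are assumptions and may differ on your stack version):

```bash
# Find the Ranger Kafka plugin package actually installed on the broker host.
yum list installed | grep -i 'ranger.*kafka-plugin'

# Remove and reinstall it so the post-install scriptlets recreate the symlinks
# (substitute the package name reported by the command above).
yum remove  -y 'ranger_*-kafka-plugin'
yum install -y 'ranger_*-kafka-plugin'

# Verify the Ranger jars are linked into the broker's classpath again.
ls -l /usr/hdp/current/kafka-broker/libs/ | grep -i ranger
```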
07-11-2017
08:38 PM
Hello gurus, we were facing an issue with Kafka and decided to re-install the service (removed it via Ambari).
Now the Kafka brokers refuse to start with the following errors:
FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.ClassNotFoundException: org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer
I'm out of ideas on where to look and which jars are involved. My /etc/kafka and /usr/hdp/current paths look good. Thanks in advance.
Mayank
Labels:
- Apache Kafka
- Apache Ranger
10-05-2016
07:46 PM
2 Kudos
We tried three scenarios and all failed.

1) Creating a Hive table on HBase (with proper Hive/HBase policies for the user):

hive> CREATE TABLE hbase_table_1(key int, value string)
      STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
      TBLPROPERTIES ("hbase.table.name" = "xyz",
                     "hbase.mapred.output.outputtable" = "xyz");
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
MetaException(message:org.apache.hadoop.hbase.security.AccessDeniedException:
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'abc@ABC.COM' (action=create)
    at org.apache.ranger.authorization.hbase.AuthorizationSession.publishResults(AuthorizationSession.java:254)
    at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.authorizeAccess(RangerAuthorizationCoprocessor.java:595)
    at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.requirePermission(RangerAuthorizationCoprocessor.java:664)
    at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preCreateTable(RangerAuthorizationCoprocessor.java:769)
    at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preCreateTable(RangerAuthorizationCoprocessor.java:496)
    at org.apache.hadoop.hbase.master.MasterCoprocessorHost$11.call(MasterCoprocessorHost.java:216)
    at org.apache.hadoop.hbase.master.MasterCoprocessorHost.execOperation(MasterCoprocessorHost.java:1140)
    at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateTable(MasterCoprocessorHost.java:212)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1533)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:454)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55401)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
2) Creating the table directly in HBase:

hbase(main):001:0> create 'emp', 'personal', 'professional'
ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'abc@ABC.COM' (action=create)

3) Using grant from the HBase shell:
hbase(main):001:0> grant 'abc', 'RWCA'
ERROR: org.apache.hadoop.hbase.coprocessor.CoprocessorException: SSLContext must not be null
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.grant(RangerAuthorizationCoprocessor.java:1171)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7553)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1878)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1860)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)

The easy solution we tried was to disable SSL, and it worked.
We also tried creating different policies; a policy on the namespace 'default:*' worked, but users were ONLY able to create the table.
They were NOT able to scan the table, and issue (3) still had the same problem:
ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'abc@ABC.COM', action: scannerOpen, tableName:hello1, family:col1.
The RegionServers were playing an important role here with the Ranger policies.
While the HBase Master was able to write its policy cache, the policy cache on the RegionServers was 0 bytes (communication blocked). We distributed the keystore and truststore from the HBase Master to all the worker nodes running a RegionServer, restarted HBase, and this solved the issue. How to do it: in Ambari, go to HBase -> Configs and filter on "SSL":
1) xasecure.policymgr.clientssl.keystore - find the /path/keystore.file.name and distribute it to all the machines (keep the path/file name the same).
2) xasecure.policymgr.clientssl.truststore - find the /path/truststore.file.name and distribute it to all the machines (keep the path/file name the same).
Hope this will help. Thanks
Mayank
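For reference, a minimal sketch of the distribution step described above (hostnames and file paths are placeholders; the real paths come from the two xasecure.policymgr.clientssl.* properties in Ambari):

```bash
# Placeholder paths -- use the values configured in Ambari for
# xasecure.policymgr.clientssl.keystore / .truststore.
KEYSTORE=/etc/security/serverKeys/ranger-plugin-keystore.jks
TRUSTSTORE=/etc/security/serverKeys/ranger-plugin-truststore.jks

# Placeholder RegionServer hostnames.
for host in rs01.example.com rs02.example.com rs03.example.com; do
  scp "$KEYSTORE" "$TRUSTSTORE" "$host":/etc/security/serverKeys/
done

# Restart HBase (Master and RegionServers) from Ambari so each RegionServer's
# Ranger plugin can reach Ranger Admin over SSL and refill its policy cache.
```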
Tags:
- Issue Resolution
- Ranger
- ranger-hbase
- Security
- ssl
09-29-2016
02:09 PM
Figured it out. I generated a new cert with Signature Algorithm sha256RSA (signature hash algorithm SHA-256), whereas the ones I had earlier were sha1RSA and SHA-1 respectively. SHA-1 is considered weak; IE doesn't seem to care, but Chrome was not happy about it. If it's an internal-only cluster and you are using a local CA (internal or self-signed), you can still live with SHA-1: you will still get a TLS connection and the resources are served securely, however with a warning "This page is insecure (broken HTTPS)". Hope this helps, and thanks to the community for thinking it through. Regards
Mayank
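A minimal sketch of generating a SHA-256-signed key and CSR with openssl, assuming your internal CA then signs the CSR (file names and the subject are placeholders):

```bash
# New 2048-bit RSA key plus a CSR whose signature uses SHA-256 instead of SHA-1.
openssl req -new -newkey rsa:2048 -nodes -sha256 \
  -keyout server.key -out server.csr \
  -subj "/CN=ambari.example.com"

# After the CA returns the signed certificate, confirm the signature algorithm.
openssl x509 -in server.crt -noout -text | grep "Signature Algorithm"
```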
09-27-2016
11:16 PM
We have set up a one-way trust from AD to the KDC and validated that we were able to get tickets from AD. Kerberos realm - ABC.NET
AD domain - XYZ.COM. Validation: kinit ad_user@XYZ.COM, and klist showed a valid ticket. Now, just a few hours later, the trust appears broken: the same command, kinit ad_user@XYZ.COM, gives kinit: Cannot find KDC for realm "XYZ.COM" while getting initial credentials. Thanks in advance. Just want to hit myself on the face.
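A rough checklist for the "Cannot find KDC for realm" error, under the assumption that kinit resolves the AD KDCs either from an explicit [realms] entry in /etc/krb5.conf or from DNS SRV records (the commands below are diagnostics, not the fix):

```bash
# Is there an explicit [realms] entry for the AD realm?
grep -n -A4 'XYZ.COM' /etc/krb5.conf

# If dns_lookup_kdc is being relied on instead, can the AD KDCs be found via SRV records?
dig +short -t SRV _kerberos._tcp.xyz.com

# Retry with tracing to see which lookup path fails (MIT Kerberos 1.9+).
KRB5_TRACE=/dev/stdout kinit ad_user@XYZ.COM
```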
Labels:
- Kerberos
09-23-2016
03:21 PM
Works for IE, however it is still broken for Chrome. Any advice/help is appreciated.
09-13-2016
02:47 PM
Ranger 0.5.0.2.4. Will try your suggestion; however, since the original error was (actual: 2465, maximum: 2000), do we still need to take it all the way to 4000? Thanks
09-13-2016
02:24 PM
1 Kudo
We are running Ranger on Oracle.
We hit a common error (in the NN logs and Hive logs): ORA-12899: value too large for column "RANGERDBA"."XA_AUDIT"."REQUEST_DATA" (actual: 2465, maximum: 2000). We changed REQUEST_DATA to VARCHAR2(3000) from VARCHAR2(2000). Now the new error is ORA-12899: value too large for column "RANGERDBA"."XA_AUDIT"."REQUEST_DATA" (actual: 3465, maximum: 3000).
This is a fresh installation on Oracle, not migrated from MySQL.
In my past experience we had the same issue where the DB was migrated from MySQL and there were some non-ASCII chars; we fixed it by changing the column to VARCHAR2(2000 CHAR) from VARCHAR2(2000). Will the same solution fix this?
What can cause this issue? Thanks
Mayank
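A minimal sketch of the character-semantics change mentioned above, assuming the column named in the ORA-12899 message (the table/column names, connection string, and the 4000 width are assumptions to adapt). With the default BYTE semantics, multi-byte characters can overflow the column even after it has been widened:

```bash
# Run as the Ranger DB owner; adjust table/column names to what the error reports.
sqlplus rangerdba <<'SQL'
ALTER TABLE XA_AUDIT MODIFY (REQUEST_DATA VARCHAR2(4000 CHAR));
SQL
```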
Labels:
- Apache Ranger
09-08-2016
06:29 PM
We are using a single wildcard cert provided by the enterprise CA. Thanks
Mayank
09-08-2016
06:08 PM
While securing Ambari Server for HTTPS, we can successfully log in over https on the default port 8443; however, the https indicator is struck out and the browser says "This page is insecure (broken HTTPS)". We are using wildcard certs, initially in .cer format, which we had to convert to .pem format using openssl. What is the preferred format and encryption for the certs? The current errors say:
1) SHA-1 Certificate: The certificate for this site expires in 2017 or later, and the certificate chain contains a certificate signed using SHA-1.
2) Certificate Error: There are issues with the site's certificate chain (net::ERR_CERT_COMMON_NAME_INVALID).
Thanks Mayank
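For reference, a minimal sketch of the .cer-to-.pem conversion mentioned above (file names are placeholders; if the .cer is already Base64/PEM encoded, the -inform der flag is unnecessary):

```bash
# DER-encoded .cer -> PEM
openssl x509 -inform der -in wildcard.cer -out wildcard.pem

# Sanity-check the result (the subject should match the wildcard name Ambari is served on).
openssl x509 -in wildcard.pem -noout -subject -issuer -dates
```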
Labels:
- Apache Ambari
08-30-2016
09:56 PM
Found it. I would have left it as-is, but since it is in production, sometimes you just need to know. For replication, Falcon needs to be aware of the local and remote cluster IDs via the property below:
dfs.nameservices - mine has a value like "prodID,devID". What the balancer does is try to reach both nameservices. If I run the command in the prod cluster as hdfs with a proper ticket, it throws an error for "hdfs-prod" (which is my principal without the REALM), yet it is still balancing the prod cluster. So the error, although not clear, is actually a permission denial on the remote nameservice (which makes sense): the user is still "hdfs", but the principal there is different ("hdfs-dev" in my case). I ran the same command in dev and the cluster was rebalanced, but I got the same error, this time "Access denied for user hdfs-dev. Superuser privilege is required." Thanks for the support @emaxwell, @mqureshi, @Kuldeep Kulkarni. I hope the above answer will help others. (A few other hdfs commands have no effect/errors.) Thanks Mayank
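A few sanity checks that illustrate the behaviour described above (nameservice IDs are placeholders; hdfs getconf, klist, and hdfs balancer are standard commands):

```bash
# The balancer walks every nameservice listed here, not just the local one.
hdfs getconf -confKey dfs.nameservices        # e.g. prodID,devID

# Which principal does the current ticket belong to (hdfs-prod@... vs hdfs-dev@...)?
klist

# Balancing still proceeds on the local cluster; the "Access denied" message
# comes from the remote nameservice, where this principal is not a superuser.
hdfs balancer -threshold 10
```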
08-30-2016
08:30 PM
Thanks @mqureshi. As stated above, I'm running the command as "hdfs"; "hdfs-prod" is my principal name without the realm. This user is part of the superusergroup and all the mappings show up right. I'm sure it's something that just misses the eye. Regards Mayank
08-30-2016
07:39 PM
Thanks @emaxwell, I'm running those commands as the hdfs user, and 'hdfs dfsadmin -safemode' works fine. I can pretty much use the balancer from Ambari, but I'm super curious to know what went wrong here. Thanks Mayank
08-30-2016
06:48 PM
Thanks for the pointer @Kuldeep Kulkarni, however the rules are set properly.
I'm pretty much able to do everything with the keytab except the balancer command. Thanks Mayank
08-30-2016
06:38 PM
1 Kudo
Hello experts, while running the balancer utility as the hdfs user I'm getting the error below: HDFS Balancer - Access denied for user hdfs-prod. Superuser privilege is required. Note - I'm running as user hdfs against a Kerberized cluster. My principal name is hdfs-prod@ABC.NET, dfs.permissions.superusergroup = hdfs, and 'hdfs groups hdfs' shows hdfs : hadoop hdfs. I'm sure I'm missing something here. Thanks Mayank
Labels:
- Apache Hadoop
08-15-2016
02:12 PM
1 Kudo
Hello gurus, one of our clients is asking for Zeppelin on a Kerberized cluster.
Below are my doubts (I have searched the forums but have not found anything solid): 1) What is the current state of security around Zeppelin?
2) Can we run Zeppelin in a Kerberized environment?
Thanks
Mayank
Labels:
- Apache Zeppelin
07-20-2016
06:22 PM
1 Kudo
Hello experts, I'm trying to understand the total size / block usage of an HDFS snapshot. I have a dir like /user/x/data, and an hdfs ls tells me it holds 1.1 TB. So if I take a snapshot of /user/x/data, will the snapshot consume the same space, and how many blocks does it use? My earlier output from hdfs dfsadmin -report was 19.6 TB, and after taking the snapshot it was still the same. If a snapshot takes the same space as the source, why doesn't the report change? Thanks
Mayank
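A minimal walkthrough, using the path from the question, that shows why dfsadmin -report does not move: an HDFS snapshot only records metadata at creation time, and extra space is consumed only as the live data later diverges from the snapshotted state:

```bash
# Enable snapshots on the directory and take one.
hdfs dfsadmin -allowSnapshot /user/x/data
hdfs dfs -createSnapshot /user/x/data snap1

# Live data size (the ~1.1 TB reported by ls/du).
hdfs dfs -du -s -h /user/x/data

# The snapshot is browsable but references the same blocks, so no new blocks yet.
hdfs dfs -ls /user/x/data/.snapshot/snap1
```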
Tags:
- Hadoop Core
- snapshot
Labels:
- HDFS
06-08-2016
02:58 PM
Thanks @emaxwell. I hope this will help most of us, especially the ones using an MS AD KDC. Regards
Mayank
06-08-2016
02:21 PM
Hello experts, I feel confident with Kerberos authentication, however a recent article has created a panic among a few customers. I would like to understand whether there is a real threat and how others are thinking through it. http://news4security.com/posts/2015/12/old-microsoft-kerberos-vulnerability-gets-new-spotlight/ The article talks about various ways to attack Kerberos and obtain or pass forged tickets.
It would be really helpful if security experts could clear the air, especially on what these threats mean in the Hadoop world (if any). Thanks Mayank
Labels:
- Apache Hadoop
- Kerberos
- Security
04-07-2016
07:56 PM
Yes, and restart it. Should be fine. Thanks!
03-22-2016
09:02 PM
1 Kudo
Today the client is using a couple of staging/FTP servers, but wants to know whether there are other practices; all the data is in HDFS.
03-22-2016
05:37 PM
1 Kudo
What are the best practices around copying data between two clusters located in different datacenters on different LANs? The scope is to limit loops.
Labels:
- Apache Hadoop
01-27-2016
04:24 PM
@Gerd Koenig I had the same doubts, thanks for confirming. Can you share something on putting NMs into different config groups, at your leisure?
01-27-2016
04:23 PM
1 Kudo
@Artem Ervits checked all of those and it does not seem to be an issue
01-27-2016
05:04 AM
1 Kudo
One of our clients has asked us to move the prefix log location to a different mount point for all the service logs; for example, the prefix log location for HDFS was moved from /var/log/hadoop to /hdp/logs/hadoop via API calls. Everything restarted smoothly, however only one NM out of 5 is coming up, and a manual restart only works on that first NM. All the other NMs throw the same error, below:
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r ef0582ca14b8177a3cbb6376807545272677d730; compiled by 'jenkins' on 2015-12-16T03:01Z
STARTUP_MSG: java = 1.7.0_67
************************************************************/
2016-01-26 15:01:25,155 INFO nodemanager.NodeManager (LogAdapter.java:info(45)) - registered UNIX signal handlers for [TERM, HUP, INT]
2016-01-26 15:01:26,283 INFO recovery.NMLeveldbStateStoreService (NMLeveldbStateStoreService.java:initStorage(927)) - Using state database at /hdp/logs/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state for recovery
2016-01-26 15:01:26,313 INFO service.AbstractService (AbstractService.java:noteFailure(272)) - Service org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService failed in state INITED; cause: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock /hdp/logs/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state/LOCK: Resource temporarily unavailable
org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock /hdp/logs/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state/LOCK: Resource temporarily unavailable
at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:930)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:204)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:178)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:220)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:537)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:585)
2016-01-26 15:01:26,316 INFO service.AbstractService (AbstractService.java:noteFailure(272)) - Service NodeManager failed in state INITED; cause: org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock /hdp/logs/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state/LOCK: Resource temporarily unavailable
org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock /hdp/logs/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state/LOCK: Resource temporarily unavailable
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:178)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:220)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:537)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:585)
Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock /hdp/logs/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state/LOCK: Resource temporarily unavailable
at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:930)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:204)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
... 5 more
2016-01-26 15:01:26,317 FATAL nodemanager.NodeManager (NodeManager.java:initAndStartNodeManager(540)) - Error starting NodeManager
org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock /hdp/logs/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state/LOCK: Resource temporarily unavailable
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:178)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:220)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:537)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:585)
Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock /hdp/logs/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state/LOCK: Resource temporarily unavailable
at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:930)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:204)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
... 5 more
2016-01-26 15:01:26,319 INFO nodemanager.NodeManager (LogAdapter.java:info(45)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at bvluxhdpdn05.conocophillips.net/158.139.121.115
************************************************************/

As we can see, it is not complaining that the LOCK file is absent, but that it is unavailable: whichever NM starts first acquires this LOCK (remember this is a single shared mount point, not the local file system). If I change the log location back to the local file system, even for example /tmp/yarnlogs, it works smoothly, since every NM gets access to a LOCK file on the local file system wherever it is installed. Has someone faced this issue, and can you please suggest a fix? Thanks
Mayank
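A hedged pointer based on the stack trace above (the property name is the standard YARN one; the value shown is only an example): the leveldb state store that fails to lock lives under yarn.nodemanager.recovery.dir, so when the log prefix sits on a shared mount every NodeManager competes for the same LOCK file; keeping that directory on node-local storage avoids the collision:

```bash
# Where does the recovery store currently point? (the trace shows
# /hdp/logs/hadoop-yarn/nodemanager/recovery-state)
grep -A1 'yarn.nodemanager.recovery.dir' /etc/hadoop/conf/yarn-site.xml

# Example value to set via Ambari -- a node-local path that exists on every NM host:
#   yarn.nodemanager.recovery.dir = /var/lib/hadoop-yarn/nodemanager/recovery-state
```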
Labels:
- Apache Hadoop
- Apache YARN
01-26-2016
04:55 PM
A few weeks back we were working with a customer and configured Solr for Ranger. The customer decided to skip Solr until it ships as a GA feature with Ambari. After uninstalling Solr and removing the Solr properties, the customer upgraded Ambari and the HDP stack, and now the ambari-server logs are flooded with the error below; a solution/fix would be helpful.
ERROR [qtp-ambari-client-2103] ClusterImpl:2145 - Config inconsistency exists: unknown configType=solr-env
Thanks Mayank
Labels:
- Apache Ambari
01-25-2016
04:37 PM
1 Kudo
@stevel thanks for the explanation of the upcoming Kerby. We had the same error message that you have already documented, however with a different cause ("maybe an unsupported cache type"): the error was No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt). We commented out "default_ccache_name = KEYRING:persistent:%{uid}" and that fixed it. Thanks
Mayank
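For reference, the /etc/krb5.conf change described above (the KEYRING ccache type is not readable by the JVM's Kerberos implementation, which is why JVM-based services cannot find the TGT):

```bash
# Locate the setting, then comment it out so the default FILE: ccache is used.
grep -n 'default_ccache_name' /etc/krb5.conf
#   default_ccache_name = KEYRING:persistent:%{uid}    <- comment this line out
```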
01-22-2016
05:31 PM
Thanks @Ali Bajwa, posted it as an article.