Member since: 04-08-2019
Posts: 115
Kudos Received: 97
Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 4106 | 04-16-2016 03:39 AM |
|  | 2168 | 04-14-2016 11:13 AM |
|  | 3776 | 04-13-2016 12:31 PM |
|  | 4774 | 04-08-2016 03:47 AM |
|  | 3722 | 04-07-2016 05:05 PM |
05-05-2022
04:36 PM
As a general statement this is not right by any means. LDAP provides secure, encrypted authentication (the user password is encrypted and the communication runs over SSL/TLS), together with user/group management. It is only that the Hadoop stack does not support this: the only two authentication methods implemented across the CDP components are the dummy simple auth (described above) and Kerberos authentication (used in combination with PAM or LDAP for user/group mappings). As an example, nothing less than Knox (the security gateway to HDP or CDP) implements full authentication using only LDAP (with TLS), and it relies on Kerberos only to authenticate a single service/proxy user to communicate with the rest of the cluster.
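For reference, a skeleton of a Knox topology doing exactly this, authenticating users against LDAP over TLS through the ShiroProvider. The hostnames, the userDnTemplate, and the realm class name are assumptions to adapt (the class moved from org.apache.hadoop.gateway.* to org.apache.knox.gateway.* across Knox versions):

<topology>
  <gateway>
    <provider>
      <role>authentication</role>
      <name>ShiroProvider</name>
      <enabled>true</enabled>
      <param>
        <name>main.ldapRealm</name>
        <value>org.apache.knox.gateway.shirorealm.KnoxLdapRealm</value>
      </param>
      <param>
        <name>main.ldapRealm.userDnTemplate</name>
        <value>uid={0},ou=people,dc=example,dc=com</value>
      </param>
      <param>
        <name>main.ldapRealm.contextFactory.url</name>
        <value>ldaps://ldap.example.com:636</value>
      </param>
      <param>
        <name>urls./**</name>
        <value>authcBasic</value>
      </param>
    </provider>
  </gateway>
</topology>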
11-29-2020
12:09 AM
When we restart the JournalNode quorum, the epoch number changes. We usually see these errors when the JournalNodes are not in sync. Check the writer epoch in the current directory of each JournalNode process; for whichever JournalNode is lagging, we can manually copy the files over from a working JournalNode and it will pick them up. This should happen automatically when the JournalNodes are restarted; if not, the manual copy sketched below is the fallback.
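A rough sketch of that manual copy, assuming HDP-style defaults; the actual path comes from dfs.journalnode.edits.dir, and "mycluster" and the hostname stand in for your nameservice ID and healthy JournalNode:

# on the lagging JournalNode, after stopping the JournalNode process (e.g. from Ambari)
rsync -av healthy-jn-host:/hadoop/hdfs/journal/mycluster/current/ /hadoop/hdfs/journal/mycluster/current/
chown -R hdfs:hadoop /hadoop/hdfs/journal/mycluster
# start the JournalNode again and watch its log for the new writer epoch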
06-14-2020
04:49 PM
How can this token be used with https://hdfscli.readthedocs.io/en/latest/api.html#hdfs.client.TokenClient ?
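For context, a minimal sketch of what such a call might look like with the hdfs Python package, assuming token already holds the delegation token string and the NameNode serves WebHDFS on port 50070:

from hdfs import TokenClient

# each WebHDFS request is authenticated with the delegation token instead of Kerberos
client = TokenClient('http://namenode-host:50070', token=token, root='/')
print(client.list('/user'))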
04-13-2020
12:25 PM
Hi, can I instead add the following line to the spark-defaults.conf file?

spark.ui.port 4041

Will that have the same effect? Thanks
01-22-2019
11:04 PM
Thanks, @Sandeep Nemuri
01-04-2018
11:54 AM
Hey everyone, I have a somewhat similar question, which I posted here: https://community.hortonworks.com/questions/155681/how-to-defragment-hdfs-data.html I would really appreciate any ideas. cc @Lester Martin @Jagatheesh Ramakrishnan @rbiswas
07-28-2016
06:24 AM
2 Kudos
We have noticed production job failures where a customer upgraded Hive from 0.14 (HDP 2.1) to a later version (1.2.x or above) and critical jobs failed (not to mention the severity 1 case). This is due to changes in the reserved words between the source and target Hive versions. For example, the word 'date' is not a reserved word in Hive 0.14, but in Hive 1.2.1 it is. The same is the case with REGEXP and RLIKE. The reserved keywords governed by hive.support.sql11.reserved.keywords are listed here: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ReservedKeywords

There are two ways to keep using those reserved keywords as identifiers:

1) Use quoted identifiers. This is the best option, though it requires code changes.
2) Set hive.support.sql11.reserved.keywords=false.

To illustrate option 1: with the word "user" being a keyword, we can still use it as an identifier by quoting it in backticks:

SELECT createddate, `user`.screenname FROM twitter_json4 WHERE `user`.name LIKE 'Sarah%';

Option 2 makes queries easier to write. However, if hive.support.sql11.reserved.keywords needs to be set back to true for some reason (for example during an upgrade), the existing unquoted queries make Hive throw the following error:

FailedPredicateException(identifier,{useSQL11ReservedKeywordsForIdentifier()}?)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier(HiveParser_IdentifiersParser.java:11644)
at org.apache.hadoop.hive.ql.parse.HiveParser.identifier(HiveParser.java:45920)

OPTION 2 in action:

hive> set hive.support.sql11.reserved.keywords;
hive.support.sql11.reserved.keywords=false
hive> create table table (user string);   ==> both "table" and "user" are keywords
OK
Time taken: 1.458 seconds
hive> desc table;
OK
user    string
Time taken: 0.34 seconds, Fetched: 1 row(s)
hive> show tables;
OK
table
Time taken: 0.075 seconds, Fetched: 1 row(s)
hive> set hive.support.sql11.reserved.keywords=true;   ===> enabling the property
hive> show tables;
OK
table
Time taken: 0.041 seconds, Fetched: 1 row(s)
hive> describe table;
FailedPredicateException(identifier,{useSQL11ReservedKeywordsForIdentifier()}?)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier(HiveParser_IdentifiersParser.java:11644)
at org.apache.hadoop.hive.ql.parse.HiveParser.identifier(HiveParser.java:45920)
at org.apache.hadoop.hive.ql.parse.HiveParser.tabTypeExpr(HiveParser.java:15574)

Setting hive.support.sql11.reserved.keywords to false lets the user employ keywords as identifiers without Hive throwing any exception. Keep in mind that setting it back to true requires quotes to differentiate keywords from identifiers. Feel free to get in touch with Hortonworks Support in case of any issues.
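That last describe fails only because the identifier is unquoted; option 1 still applies with the property enabled, so quoting the table name in backticks (same session, expected output assumed) should get past the parser:

hive> describe `table`;
OK
user    string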
05-06-2016
02:49 PM
For the Hue example, you have to create a Hue user in AD and then create a keytab file for Hue. This is where I am stuck. How do you create the keytab file for the Hue user?
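Since the user lives in AD, one common way is to generate the keytab on the AD side with ktpass; the account name, principal, and realm below are assumptions, and the command runs on a domain-joined Windows host with domain-admin rights:

ktpass /princ hue/hue-host.example.com@EXAMPLE.COM /mapuser huesvc@EXAMPLE.COM /pass * /out hue.service.keytab /ptype KRB5_NT_PRINCIPAL /crypto AES256-SHA1

Then copy hue.service.keytab to the Hue host and verify it with klist -kt hue.service.keytab.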
04-12-2016
02:14 PM
4 Kudos
*** Take a backup of the complete RANGER database before you perform any operation. This issue can happen if you hit problems during the upgrade or if the required Java patches are only partially applied.

1) Stop Ranger from the Ambari UI.
2) Take a fresh DB dump (see the mysqldump sketch after this list).
3) Log in to the Ranger database, review the x_policy table, and make sure all policies are present. (If policies are missing, you may have to restore the database from the backup taken before the upgrade and simply restart the Ranger service.)

mysql> use ranger;
mysql> select * from x_policy;

4) Delete all Java patches from the x_db_version_h table:

mysql> select * from x_db_version_h;
mysql> create table x_db_version_h_backup as select * from x_db_version_h;
mysql> delete from x_db_version_h where version like 'J%';

5) Start the Ranger service from the Ambari UI. This takes a couple of minutes to re-apply all the deleted patches; once done, the plugins will start downloading policies.
6) Log in to the Ranger UI and verify that the policies are visible and the plugins are in sync.
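A minimal dump command for step 2; the credentials are assumptions and "ranger" is the database name used above:

mysqldump -u root -p ranger > ranger_backup_$(date +%F).sql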
04-11-2016
01:56 PM
1 Kudo
"Error executing: call add_columns_to_support_audit_log_aggregation(); java.sql.SQLException: Incorrect key file for table 'xa_access_audit'; try to repair it SQLException : SQL state: HY000 java.sql.SQLException: Incorrect key file for table 'xa_access_audit'; try to repair it ErrorCode: 1034 2016-04-11 06:05:59,187 [E] 015-auditlogaggregation.sql import failed!" SOLUTION: Make sure you have enough space in the /tmp directory and take the backup of 'xa_access_audit' as below. Login to MySQL. use ranger_audit; create table xa_access_audit_backup as select * from 'xa_access_audit' ; truncate table xa_access_audit_backup; and retry the upgrade.