Member since: 02-25-2016
Posts: 23
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1463 | 07-24-2018 02:30 PM
 | 1805 | 02-14-2017 01:55 PM
09-10-2018
02:39 PM
In my Hadoop cluster, the OS, Ranger, and Kerberos are integrated with an external AD. Both 'id <username>' and 'hdfs groups <username>' show the groups to which the user belongs.
09-10-2018
02:26 PM
Hi Sriram, I was able to do this by adding the following parameters to Custom core-site.xml in HDFS through Ambari. Please change the values as per your environment.
hadoop.security.group.mapping=org.apache.hadoop.security.CompositeGroupsMapping
hadoop.security.group.mapping.provider.ad4users=org.apache.hadoop.security.LdapGroupsMapping
hadoop.security.group.mapping.provider.ad4users.ldap.base=dc=csmodule,dc=com
hadoop.security.group.mapping.provider.ad4users.ldap.bind.user=cn=username,OU=Users,DC=hortonworks,DC=com
hadoop.security.group.mapping.provider.ad4users.ldap.bind.password=password
hadoop.security.group.mapping.provider.ad4users.ldap.search.attr.group.name=cn
hadoop.security.group.mapping.provider.ad4users.ldap.search.attr.member=member
hadoop.security.group.mapping.provider.ad4users.ldap.search.filter.group=(objectclass=group)
hadoop.security.group.mapping.provider.ad4users.ldap.search.filter.user=(&(|(objectclass=person)(objectclass=applicationProcess))(sAMAccountName={0}))
hadoop.security.group.mapping.provider.ad4users.ldap.url=ldap-url:389
hadoop.security.group.mapping.provider.shell4services=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
hadoop.security.group.mapping.providers=ad4users,shell4services
hadoop.security.group.mapping.providers.combined=true
Reference: https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-common-project/hadoop-common/src/site/markdown/GroupsMapping.md#Composite_Groups_Mapping
Please accept my answer if you found this helpful.
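To verify the mapping after the change, a quick command-line check can help. A minimal sketch, assuming a hypothetical AD user jdoe (some property changes, such as switching the provider class itself, may require a NameNode restart rather than just a refresh):

```bash
# Ask the NameNode to reload its user-to-groups mappings after the core-site.xml change
hdfs dfsadmin -refreshUserToGroupsMappings

# Check which groups HDFS resolves for the user (jdoe is a placeholder)
hdfs groups jdoe
```

If the AD groups show up for AD accounts and the local Unix groups still show up for service accounts, CompositeGroupsMapping is consulting both providers as intended.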
08-29-2018
06:38 PM
Hi @lamrab
Is the issue resolved? If yes, please let me know how it was done. If not: is your cluster Kerberized? Can you also add the HBase logs from inside the Metrics Collector? A few things I have tried: in Ambari, go to the host where the Metrics Collector is installed, refresh the configs, and try restarting the Metrics Collector again. The issues I have faced so far were caused either by the values stored in ZooKeeper (zkClient) or by something wrong in the Metrics Collector files stored on the hosts. If you don't need the previously stored metrics, you can follow the steps below "at your own risk" (see the sketch after this list):
1. Stop all the Metrics Collector, Metrics Monitor, and Grafana services.
2. Delete the service.
3. Rename/delete the ambari-metrics-collector folder under /var/log/ and /var/lib/.
4. Add the Ambari Metrics service from Ambari again.
The above worked for me.
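For step 3, a minimal sketch assuming the default AMS directory layout (the exact paths may differ in your installation); renaming instead of deleting keeps a way back:

```bash
# Back up the Metrics Collector data and logs rather than deleting them outright
mv /var/lib/ambari-metrics-collector /var/lib/ambari-metrics-collector.bak
mv /var/log/ambari-metrics-collector /var/log/ambari-metrics-collector.bak
```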
07-24-2018
02:30 PM
Hi, I upgraded my cluster yesterday, following the documentation at: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-upgrade/content/ambari_upgrade_guide.html Read each and every line, because it is a bit different from previous upgrades. A few things I noticed:
1. Ambari Metrics System and SmartSense must be upgraded after first registering the HDP 3.0.x version and installing the HDP 3.0.x packages. Please do not attempt to start Ambari Metrics or SmartSense until the HDP 3.0.x upgrade has completed.
2. DO NOT RESTART services after the Ambari upgrade.
3. If you have configured Ambari to authenticate against an external LDAP or Active Directory, you must re-run ambari-server setup-ldap. If you have configured your cluster for Hive, Ranger, or Oozie with an external database (Oracle, MySQL, or PostgreSQL), you must re-run ambari-server setup --jdbc-db and --jdbc-driver (see the sketch after this list).
4. There is only one method for upgrading HDP-2.6 to HDP-3.0 with Ambari: Express Upgrade. An Express Upgrade orchestrates the HDP upgrade in an order that will incur cluster downtime.
5. KDC Admin Credentials: the Ambari Server will add new components as part of the HDP 2.6 to HDP 3.0 upgrade and needs to be configured to save the KDC admin credentials so the necessary principals can be created. I used: https://community.hortonworks.com/articles/42927/adding-kdc-administrator-credentials-to-the-ambari.html
6. Before upgrading the cluster to HDP 3.0, it is required to prepare Hive for the upgrade. The Hive pre-upgrade tool must be run to prepare the cluster.
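For point 3, a minimal sketch of the two commands; the database type and driver jar path below are placeholders for whatever your cluster actually uses:

```bash
# Re-run LDAP setup after upgrading Ambari
ambari-server setup-ldap

# Re-register the external database's JDBC driver (mysql and the jar path are examples)
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
```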
04-27-2018
07:51 AM
I don't think the changes have taken effect. If you look at the table description, it still shows TTL => '86400 SECONDS (1 DAY)'. @Kashif Amir, does your problem still exist?
04-24-2018
10:26 AM
Hi Kashif, can you please log in to the HBase UI and check whether the table shows up there? If yes, can you post the table description here? Also, what command did you use to change the TTL?
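For reference, one way to check and change a TTL non-interactively via the HBase shell; the table name 'mytable' and column family 'cf' are placeholders:

```bash
# Show the table description, including the current TTL per column family
echo "describe 'mytable'" | hbase shell

# Set the TTL to one day (86400 seconds) on a hypothetical column family 'cf'
echo "alter 'mytable', {NAME => 'cf', TTL => 86400}" | hbase shell
```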
04-01-2018
05:27 PM
Hi, were you able to find a solution? I am facing the same issue. When I run the query from the Hive shell or Zeppelin as the hive user, the query works fine. But if I run it as another user, it sometimes works, and most of the time I get the Vertex error. Thanks & Regards
07-18-2017
03:02 PM
1 Kudo
Hi, we are using Storm to ingest data into HBase through Phoenix. Quite often, the RegionServer crashes. When I restart it, it starts successfully.
HDP-2.6.1.0, HBase 1.1.2.
The errors in the RegionServer logs are as below:
Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=test.game_test
at org.apache.phoenix.util.PhoenixRuntime.getTableNoCache(PhoenixRuntime.java:399)
at org.apache.phoenix.util.IndexUtil.getPDataTable(IndexUtil.java:749)
at org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreFileReaderOpen(IndexHalfStoreFileReaderGenerator.java:151)
... 18 more
2017-07-18 14:07:56,015 WARN [regionserver/xyz.nix.com/10.73.19.44:16020-splits-1500386154270] regionserver.SplitTransaction: Should use rollback(Server, RegionServerServices, User)
2017-07-18 14:07:56,015 FATAL [regionserver/xyz.nix.com/10.73.19.44:16020-splits-1500386154270] regionserver.HRegionServer: ABORTING region server xyz.nix.com,16020,1500386060492: Abort; we got an error after point-of-no-return
2017-07-18 14:07:56,016 FATAL [regionserver/xyz.nix.com/10.73.19.44:16020-splits-1500386154270] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator, org.apache.phoenix.coprocessor.MetaDataEndpointImpl, org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor, org.apache.phoenix.coprocessor.ScanRegionObserver, org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, org.apache.phoenix.hbase.index.Indexer, org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, org.apache.hadoop.hbase.security.token.TokenProvider, org.apache.phoenix.coprocessor.ServerCachingEndpointImpl, org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint]
Labels: Apache HBase
05-09-2017
08:14 AM
Hi @Greenhorn Techie, just curious: how did you implement it?
02-14-2017
01:55 PM
Apparently, as I mentioned above, it was a caching issue. Today, on a new date, it tried again to create the keytabs and it worked fine, since it wasn't looking into the cache. So my assumption is that the cache somehow got deleted yesterday, which is why it was failing.
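If the cache in question is a Kerberos credential cache, one way to inspect and clear it by hand is sketched below; this is an assumption about which cache was involved, not a confirmed diagnosis:

```bash
klist      # list the tickets in the current credential cache
kdestroy   # destroy the cache so the next kinit/keytab operation starts clean
```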