Member since: 02-25-2016
Posts: 23
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 746 | 07-24-2018 02:30 PM |
 | 755 | 02-14-2017 01:55 PM |
09-10-2018
02:39 PM
In my Hadoop cluster, the OS, Ranger, and Kerberos are integrated with an external AD. Both id <username> and hdfs groups <username> show the groups to which the user belongs.
09-10-2018
02:26 PM
Hi Sriram, I was able to do this by adding the following parameters to Custom core-site.xml in HDFS through Ambari. Please change the values as per your environment:
hadoop.security.group.mapping=org.apache.hadoop.security.CompositeGroupsMapping
hadoop.security.group.mapping.provider.ad4users=org.apache.hadoop.security.LdapGroupsMapping
hadoop.security.group.mapping.provider.ad4users.ldap.base=dc=csmodule,dc=com
hadoop.security.group.mapping.provider.ad4users.ldap.bind.user=cn=username,OU=Users,DC=hortonworks,DC=com
hadoop.security.group.mapping.provider.ad4users.ldap.bind.password=password
hadoop.security.group.mapping.provider.ad4users.ldap.search.attr.group.name=cn
hadoop.security.group.mapping.provider.ad4users.ldap.search.attr.member=member
hadoop.security.group.mapping.provider.ad4users.ldap.search.filter.group=(objectclass=group)
hadoop.security.group.mapping.provider.ad4users.ldap.search.filter.user=(&(|(objectclass=person)(objectclass=applicationProcess))(sAMAccountName={0}))
hadoop.security.group.mapping.provider.ad4users.ldap.url=ldap-url:389
hadoop.security.group.mapping.provider.shell4services=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
hadoop.security.group.mapping.providers=ad4users,shell4services
hadoop.security.group.mapping.providers.combined=true
Reference: https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-common-project/hadoop-common/src/site/markdown/GroupsMapping.md#Composite_Groups_Mapping
Please accept my answer if you found this helpful.
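As an illustration (not an Ambari or Hadoop API), key=value pairs like the ones above can be rendered into the <property> stanzas that ultimately land in core-site.xml. A minimal sketch, showing only a few of the properties with placeholder values:

```python
# Hypothetical helper: renders Hadoop property key=value pairs into the
# <configuration>/<property> XML format used by core-site.xml.
from xml.sax.saxutils import escape

props = {
    "hadoop.security.group.mapping":
        "org.apache.hadoop.security.CompositeGroupsMapping",
    "hadoop.security.group.mapping.providers": "ad4users,shell4services",
    "hadoop.security.group.mapping.providers.combined": "true",
}

def to_property_xml(props):
    """Render a dict of Hadoop properties as a <configuration> block."""
    stanzas = []
    for name, value in sorted(props.items()):
        stanzas.append(
            "  <property>\n"
            f"    <name>{escape(name)}</name>\n"
            f"    <value>{escape(value)}</value>\n"
            "  </property>"
        )
    return "<configuration>\n" + "\n".join(stanzas) + "\n</configuration>"

print(to_property_xml(props))
```

In practice Ambari writes this file for you; the sketch is only meant to show what the Custom core-site.xml entries become on disk.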
08-29-2018
06:38 PM
Hi @lamrab,
Is the issue resolved? If yes, please let me know how it was done. If not: is your cluster kerberized? Can you also add the HBase logs from inside the Metrics Collector? A few things I have tried: in Ambari, go to the host where the Metrics Collector is installed, refresh the configs, and try again to restart the Metrics Collector. The issues I have faced so far were due to either the values stored in the zkClient or something wrong in the Metrics Collector files stored on the hosts. If you don't need the previously stored metrics, you can follow the steps below "at your own risk":
1. Stop all the Metrics Collector, Metrics Monitor, and Grafana services.
2. Delete the service.
3. Rename/delete the ambari-metrics-collector folder under /var/log/ and /var/lib/.
4. Add the Ambari Metrics service from Ambari again.
The above worked for me.
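Step 3 above (renaming rather than deleting, so the data stays recoverable) can be sketched as follows. This is a hypothetical illustration that runs in a throwaway sandbox instead of touching the real /var/log/ and /var/lib/ paths, so it is safe to run anywhere:

```python
import datetime
import shutil
import tempfile
from pathlib import Path

# Build a sandbox that mimics the real layout; on an actual host the
# directory would be /var/lib/ambari-metrics-collector (and its twin
# under /var/log/).
sandbox = Path(tempfile.mkdtemp())
collector = sandbox / "var" / "lib" / "ambari-metrics-collector"
collector.mkdir(parents=True)

# Move the directory aside with a date-stamped suffix instead of
# deleting it, so the old metrics data can be restored if needed.
stamp = datetime.date.today().strftime("%Y%m%d")
backup = collector.with_name(collector.name + ".bak." + stamp)
shutil.move(str(collector), str(backup))

print(backup.exists(), collector.exists())  # → True False
```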
07-24-2018
02:30 PM
Hi, I upgraded my cluster yesterday, following the documentation at: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-upgrade/content/ambari_upgrade_guide.html Read each and every line, because it is a bit different from previous upgrades. A few things I noticed:
1. Ambari Metrics System and SmartSense must be upgraded after first registering the HDP 3.0.x version and installing HDP 3.0.x packages. Do not attempt to start Ambari Metrics or SmartSense until the HDP 3.0.x upgrade has completed.
2. DO NOT RESTART services after the Ambari upgrade.
3. If you have configured Ambari to authenticate against an external LDAP or Active Directory, you must re-run ambari-server setup-ldap. If you have configured your cluster for Hive, Ranger or Oozie with an external database (Oracle, MySQL or PostgreSQL), you must re-run ambari-server setup --jdbc-db and --jdbc-driver.
4. There is only one method for upgrading HDP-2.6 to HDP-3.0 with Ambari: Express Upgrade. An Express Upgrade orchestrates the HDP upgrade in an order that will incur cluster downtime.
5. KDC admin credentials: the Ambari Server will add new components as part of the HDP 2.6 to HDP 3.0 upgrade and needs to be configured to save the KDC admin credentials so the necessary principals can be created. I used: https://community.hortonworks.com/articles/42927/adding-kdc-administrator-credentials-to-the-ambari.html
6. Before upgrading the cluster to HDP 3.0, Hive must be prepared for the upgrade by running the Hive pre-upgrade tool.
04-27-2018
07:51 AM
I don't think the changes have taken effect; the table description still shows TTL => '86400 SECONDS (1 DAY)'. @Kashif Amir, does your problem still exist?
04-24-2018
10:26 AM
Hi Kashif, can you please log in to the HBase UI and check whether the table shows up there? If yes, can you post the table description here? Also, what command did you use to change the TTL?
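For reference, a TTL change is normally done with an alter statement in the hbase shell. The helper below is hypothetical (the table and column-family names are made up); it only formats such a statement, which you would then run inside hbase shell:

```python
# Hypothetical helper that formats the hbase shell statement used to
# change a column family's TTL. Table/family names are placeholders.
def alter_ttl(table, family, days):
    seconds = days * 86400  # HBase TTLs are expressed in seconds
    return f"alter '{table}', {{NAME => '{family}', TTL => {seconds}}}"

print(alter_ttl("test_table", "cf1", 1))
# → alter 'test_table', {NAME => 'cf1', TTL => 86400}
```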
04-01-2018
05:27 PM
Hi, were you able to find a solution? I am facing the same issue. When I run the query from the Hive shell or Zeppelin as the hive user, the query works fine. But if I run it as another user, it sometimes works and most of the time I get the Vertex error. Thanks & Regards
10-23-2017
02:40 PM
@pshah Can you please help? I am still facing the same issue. I also noticed one thing: when I try to connect to ZooKeeper on the Kafka server from the kerberized cluster without obtaining a ticket, I am able to connect, with the below logs:

WARN [main-SendThread(kafka01.nix.xyz.com:2181):ZooKeeperSaslClient$ClientCallbackHandler@496] - Could not login: the client is being asked for a password, but the Zookeeper client code does not currently support obtaining a password from the user. Make sure that the client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)' and restart the client. If you still get this message after that, the TGT in the ticket cache has expired and must be manually refreshed. To do so, first determine if you are using a password or a keytab. If the former, run kinit in a Unix shell in the environment of the user who is running this Zookeeper client using the command 'kinit <princ>' (where <princ> is the name of the client's Kerberos principal). If the latter, do 'kinit -k -t <keytab> <princ>' (where <princ> is the name of the Kerberos principal, and <keytab> is the location of the keytab file). After manually refreshing your cache, restart this client. If you continue to see this message after manually refreshing your cache, ensure that your KDC host's clock is in sync with this host's clock.
2017-10-23 15:56:40,982 - WARN [main-SendThread(kafka01.nix.xyz.com:2181):ClientCnxn$SendThread@1001] - SASL configuration failed: javax.security.auth.login.LoginException: No password provided Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-10-23 15:56:40,984 - INFO [main-SendThread(kafka01.nix.xyz.com:2181):ClientCnxn$SendThread@1019] - Opening socket connection to server kafka01.nix.xyz.com/10.72.19.66:2181

But when I obtain the ZooKeeper ticket on the same server, it fails:

2017-10-23 16:01:47,877 - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=sta-needs01-kafka01 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@4534b60d
Welcome to ZooKeeper!
JLine support is enabled
2017-10-23 16:01:47,989 - INFO [main-SendThread(kafka01.nix.xyz.com:2181):Login@294] - successfully logged in.
2017-10-23 16:01:47,991 - INFO [Thread-0:Login$1@127] - TGT refresh thread started.
2017-10-23 16:01:47,995 - INFO [main-SendThread(kafka01.nix.xyz.com:2181):ZooKeeperSaslClient$1@289] - Client will use GSSAPI as SASL mechanism.
2017-10-23 16:01:48,013 - INFO [Thread-0:Login@302] - TGT valid starting at: Mon Oct 23 16:01:44 CEST 2017
2017-10-23 16:01:48,013 - INFO [Thread-0:Login@303] - TGT expires: Tue Oct 24 02:01:44 CEST 2017
2017-10-23 16:01:48,013 - INFO [Thread-0:Login$1@181] - TGT refresh sleeping until: Tue Oct 24 00:06:45 CEST 2017
[zk: sta-needs01-kafka01(CONNECTING) 0] 2017-10-23 16:01:48,052 - INFO [main-SendThread(kafka01.nix.xyz.com:2181):ClientCnxn$SendThread@1019] - Opening socket connection to server kafka01.nix.xyz.com/10.72.19.66:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2017-10-23 16:01:48,058 - INFO [main-SendThread(kafka01.nix.xyz.com:2181):ClientCnxn$SendThread@864] - Socket connection established to kafka01.nix.xyz.com/10.72.19.66:2181, initiating session
2017-10-23 16:01:48,067 - INFO [main-SendThread(kafka01.nix.xyz.com:2181):ClientCnxn$SendThread@1279] - Session establishment complete on server kafka01.nix.xyz.com/10.72.19.66:2181, sessionid = 0x15f496f70e70049, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
2017-10-23 16:01:48,140 - ERROR [main-SendThread(kafka01.nix.xyz.com:2181):ZooKeeperSaslClient@388] - An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Message stream modified (41))]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state.
2017-10-23 16:01:48,140 - ERROR [main-SendThread(kafka01.nix.xyz.com:2181):ClientCnxn$SendThread@1059] - SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Message stream modified (41))]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state.
10-19-2017
02:04 PM
@Amardeep Sarkar, were you able to find a solution? I am facing the same issue as well.
07-18-2017
03:02 PM
1 Kudo
Hi, we are using Storm to ingest data into HBase through Phoenix. Quite often, the regionserver crashes; when I restart it, it starts successfully.
HDP: 2.6.1.0
HBase: 1.1.2
The error in the regionserver logs is as below:
Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=test.game_test
at org.apache.phoenix.util.PhoenixRuntime.getTableNoCache(PhoenixRuntime.java:399)
at org.apache.phoenix.util.IndexUtil.getPDataTable(IndexUtil.java:749)
at org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreFileReaderOpen(IndexHalfStoreFileReaderGenerator.java:151)
... 18 more
2017-07-18 14:07:56,015 WARN [regionserver/xyz.nix.com/10.73.19.44:16020-splits-1500386154270] regionserver.SplitTransaction: Should use rollback(Server, RegionServerServices, User)
2017-07-18 14:07:56,015 FATAL [regionserver/xyz.nix.com/10.73.19.44:16020-splits-1500386154270] regionserver.HRegionServer: ABORTING region server xyz.nix.com,16020,1500386060492: Abort; we got an error after point-of-no-return
2017-07-18 14:07:56,016 FATAL [regionserver/xyz.nix.com/10.73.19.44:16020-splits-1500386154270] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator, org.apache.phoenix.coprocessor.MetaDataEndpointImpl, org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor, org.apache.phoenix.coprocessor.ScanRegionObserver, org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, org.apache.phoenix.hbase.index.Indexer, org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, org.apache.hadoop.hbase.security.token.TokenProvider, org.apache.phoenix.coprocessor.ServerCachingEndpointImpl, org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint]
Labels:
- Apache HBase
05-09-2017
08:14 AM
Hi @Greenhorn Techie, just curious: how did you implement it?
04-25-2017
06:26 PM
Hi, can you please share the error that appears in the Metrics Collector logs?
02-14-2017
01:55 PM
Apparently, as I mentioned above, it was a caching issue. Today, on a new date, it tried to create the keytabs again and it worked fine, since it wasn't looking into the cache. So my assumption is that somehow the cache got deleted yesterday, which is why it was failing.
02-14-2017
01:44 PM
No, I haven't set up Kerberos specially for the Ambari server. I enabled Kerberos from Ambari and I am using an existing AD server. This service check works on the other nodes, and the error says something about a missing cache.
02-13-2017
06:48 PM
Hi, in a kerberized HDP 2.5 cluster, when I try to run a service check from Ambari, I get the error below, always for the Ambari server host (if I try to kinit from PuTTY, it works fine):

13 Feb 2017 19:29:46,088 INFO [ambari-client-thread-231] AmbariManagementControllerImpl:3749 - Received action execution request, clusterName=abc-123, request=isCommand :true, action :null, command :KERBEROS_SERVICE_CHECK, inputs :{}, resourceFilters: [RequestResourceFilter{serviceName='KERBEROS', componentName='null', hostNames=[]}], exclusive: false, clusterName :abc-123
13 Feb 2017 19:29:47,803 INFO [Server Action Executor Worker 4946] KerberosServerAction:352 - Processing identities...
13 Feb 2017 19:29:47,911 INFO [Server Action Executor Worker 4946] KerberosServerAction:456 - Processing identities completed.
13 Feb 2017 19:29:48,963 INFO [Server Action Executor Worker 4947] KerberosServerAction:352 - Processing identities...
13 Feb 2017 19:29:49,036 INFO [Server Action Executor Worker 4947] CreateKeytabFilesServerAction:193 - Creating keytab file for abc-123-021317@REALMNAME.COM on host abc-123-wn004.nix.REALMNAME.COM
13 Feb 2017 19:29:49,037 INFO [Server Action Executor Worker 4947] CreateKeytabFilesServerAction:193 - Creating keytab file for abc-123-021317@REALMNAME.COM on host abc-123-hn01.nix.REALMNAME.COM
13 Feb 2017 19:29:49,038 INFO [Server Action Executor Worker 4947] CreateKeytabFilesServerAction:193 - Creating keytab file for abc-123-021317@REALMNAME.COM on host abc-123-wn002.nix.REALMNAME.COM
13 Feb 2017 19:29:49,049 INFO [Server Action Executor Worker 4947] CreateKeytabFilesServerAction:193 - Creating keytab file for abc-123-021317@REALMNAME.COM on host abc-123-wn006.nix.REALMNAME.COM
13 Feb 2017 19:29:49,049 INFO [Server Action Executor Worker 4947] CreateKeytabFilesServerAction:193 - Creating keytab file for abc-123-021317@REALMNAME.COM on host abc-123-wn003.nix.REALMNAME.COM
13 Feb 2017 19:29:49,050 INFO [Server Action Executor Worker 4947] CreateKeytabFilesServerAction:193 - Creating keytab file for abc-123-021317@REALMNAME.COM on host abc-123-mn01.nix.REALMNAME.COM
13 Feb 2017 19:29:49,051 ERROR [Server Action Executor Worker 4947] CreateKeytabFilesServerAction:233 - Failed to create keytab for abc-123-021317@REALMNAME.COM, missing cached file
13 Feb 2017 19:29:49,052 INFO [Server Action Executor Worker 4947] KerberosServerAction:456 - Processing identities completed.
13 Feb 2017 19:29:49,993 ERROR [ambari-action-scheduler] ActionScheduler:428 - Operation completely failed, aborting request id: 216

I tried to disable and re-enable Kerberos, but it still fails.
01-03-2017
04:00 PM
Hi Lester, I am planning to appear for the HDPCA very soon, but I read this article: http://www.dataarchitect.cloud/important-changes-coming-to-the-hortonworks-certification-program-hortonworks/ According to it, the existing HDPCA will be changed to HCA ---> Hortonworks Certified Professional Admin --> Hortonworks Certified Expert Admin. I am not able to find this information on the Hortonworks site, so I am a bit confused about whether I should appear for the HCA or the HDPCA. Regards, Saurabh
12-15-2016
04:14 PM
Yes, that was the issue. I had changed the ACL to r instead of cdrwa, which was causing the problem. As soon as I changed it back to cdrwa, the ResourceManagers started. Thanks a lot 🙂
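For anyone hitting the same NoAuth error: the snippet below mirrors the ZooKeeper permission bits from org.apache.zookeeper.ZooDefs.Perms to show why a read-only ACL breaks the ResourceManagers, since creating /yarn-leader-election requires the CREATE bit, which an ACL of just r lacks:

```python
# Permission bits as defined in org.apache.zookeeper.ZooDefs.Perms.
READ, WRITE, CREATE, DELETE, ADMIN = 1, 2, 4, 8, 16

# The full "cdrwa" ACL is the union of all five bits; a znode ACL of
# just "r" (READ) has no CREATE bit, so creating the child znode
# /yarn-leader-election fails with KeeperException.NoAuth.
CDRWA = CREATE | DELETE | READ | WRITE | ADMIN

print(CDRWA)  # → 31
```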
12-15-2016
09:31 AM
2 Kudos
After enabling Kerberos on the cluster (upgraded to HDP 2.5), everything was working fine. Then I installed Zeppelin, which asked me to restart a few components.
After the restart, neither ResourceManager starts up:
2016-12-15 10:15:08,735 INFO zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
2016-12-15 10:15:08,735 INFO zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.compiler=<NA>
2016-12-15 10:15:08,735 INFO zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.name=Linux
2016-12-15 10:15:08,735 INFO zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.arch=amd64
2016-12-15 10:15:08,735 INFO zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.version=2.6.32-504.8.1.el6.x86_64
2016-12-15 10:15:08,735 INFO zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.name=yarn
2016-12-15 10:15:08,735 INFO zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.home=/home/yarn
2016-12-15 10:15:08,736 INFO zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.dir=/usr/hdp/2.5.0.0-1245/hadoop-yarn
2016-12-15 10:15:08,736 INFO zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=xxx.com:2181,yyy.com:2181,zzz.com:2181 sessionTimeout=10000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@62ef27a8
2016-12-15 10:15:08,752 INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server yyy.com/IP:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-15 10:15:08,757 INFO zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(864)) - Socket connection established to yyy.com/IP:2181, initiating session
2016-12-15 10:15:08,768 INFO zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1279)) - Session establishment complete on server yyy.com/IP:2181, sessionid = 0x3590197ed680104, negotiated timeout = 10000
2016-12-15 10:15:08,784 INFO service.AbstractService (AbstractService.java:noteFailure(272)) - Service org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService failed in state INITED; cause: java.io.IOException: Couldn't create /yarn-leader-election
java.io.IOException: Couldn't create /yarn-leader-election
at org.apache.hadoop.ha.ActiveStandbyElector.ensureParentZNode(ActiveStandbyElector.java:350)
at org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.serviceInit(EmbeddedElectorService.java:96)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceInit(AdminService.java:152)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:281)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1228)
Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /yarn-leader-election
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.apache.hadoop.ha.ActiveStandbyElector$3.run(ActiveStandbyElector.java:1000)
at org.apache.hadoop.ha.ActiveStandbyElector$3.run(ActiveStandbyElector.java:997)
at org.apache.hadoop.ha.ActiveStandbyElector.zkDoWithRetries(ActiveStandbyElector.java:1041)
at org.apache.hadoop.ha.ActiveStandbyElector.createWithRetries(ActiveStandbyElector.java:997)
at org.apache.hadoop.ha.ActiveStandbyElector.ensureParentZNode(ActiveStandbyElector.java:344)
... 9 more
Labels:
- Cloudera Manager