Member since 05-25-2018
77 Posts · 2 Kudos Received · 0 Solutions
05-27-2018
09:09 PM
Hi Guna,

Who should be the owner of the HDFS directory below?

yarn.nodemanager.remote-app-log-dir=/tmp/logs

What permissions (chmod) should we set on that directory? Right now I am getting the error below:

[root@node5.dataquest.reno.com 37884-yarn-NODEMANAGER]# yarn logs -applicationId application_152612073594_15218
18/05/26 13:27:05 INFO client.RMProxy: Connecting to ResourceManager at node5.dataquest.reno.com/192.188.7.8:8032
/tmp/logs/yarn/logs/application_152612073594_15218 does not exist.
Log aggregation has not completed or is not enabled.

We are on CDH 5.13.

Regards,
JJ
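For reference, the aggregated-log directory is commonly owned by the YARN service user with the sticky bit set, so every user can write their own application logs. A minimal sketch (the `yarn:hadoop` owner/group is an assumption; check which service user your cluster runs YARN as):

```shell
# Create the remote app-log dir from the post and open it up with the sticky
# bit (mode 1777), the same mode HDFS uses for /tmp.
# Assumption: the NodeManagers run as user "yarn", group "hadoop".
hdfs dfs -mkdir -p /tmp/logs
hdfs dfs -chown yarn:hadoop /tmp/logs
hdfs dfs -chmod 1777 /tmp/logs
```

With mode 1777, any user can create their own subdirectory under /tmp/logs, but only the owner (or the superuser) can delete what they wrote.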
05-25-2018
08:34 PM
Hi,

I am getting the error below when I run the following command.

First question:

bash-4.1$ yarn logs -applicationId application_55261206146721_39976
18/05/23 23:10:20 INFO client.RMProxy: Connecting to ResourceManager at xxxxxxxxxxxxxxxxxxxxxxxxx
/tmp/logs/yarn/logs/application_55261206146721_39976 does not exist.
Log aggregation has not completed or is not enabled.

I have already set:

yarn.log-aggregation-enable=true
yarn.nodemanager.remote-app-log-dir=/tmp/logs
yarn.log-aggregation.retain-seconds=3 days

Second question: is the syntax of the commands below supported in Cloudera?

yarn logs -applicationId application_332332323073474_0002 -show_application_log_info
yarn logs -applicationId application_332332323073474_0002 -show_container_log_info

Regards,
Jacqueline
Labels: Apache YARN
02-19-2018
05:15 PM
Thank you very much, Harald, for addressing my questions.

Regards,
JJ
02-18-2018
09:47 AM
Hi Guru,

Can you please clarify a few Kafka architecture questions? Please answer here rather than pointing to links (which I have already read and could not understand).

1) Where is a Kafka partition's structure created FIRST: (i) in memory, or (ii) on disk in the log.dirs location?
2) Do consumers read a partition from memory or from disk?
3) Some search results say "Kafka purges the messages as per the retention policies, regardless of whether the messages have been consumed". Does this mean that a consumer reads topics from disk only, and not from memory?
4) What is the relation among batch.size vs log.flush.interval.messages vs log.segment.bytes?
4a) https://community.hortonworks.com/articles/80813/kafka-best-practices-1.html says Kafka writes data to files as soon as log.flush.interval.messages messages have been received. Where is this file created: in memory, or on disk, and in which location?
4b) When the log file reaches log.segment.bytes, it is flushed to disk. Where is this log file first created: in memory, or in some other temporary location?

Thanks,
JJ
Labels: Apache Kafka
02-18-2018
04:16 AM
Can someone distinguish between a YARN Container and a YARN Child?

Regards,
JJ
Labels: Apache YARN
02-17-2018
07:39 PM
We have a Kafka and Spark Streaming setup. This is a continuous job, running 24x7. How can we configure the Kerberos ticket so it never expires for continuous, never-ending streaming jobs? What configuration parameters need to be set in the KDC and in the Spark job definition to keep the ticket from expiring?
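For context, a common alternative to never-expiring tickets is to let Spark renew them itself: on YARN, spark-submit accepts a principal and keytab, and the job re-obtains tickets periodically from the keytab. A hedged sketch (the principal, keytab path, and job file are placeholders, not from the original post):

```shell
# With --principal/--keytab, Spark on YARN logs in from the keytab and
# re-obtains Kerberos tickets periodically, so a long-running streaming job
# is not limited by the initial ticket lifetime.
# Placeholders: principal, keytab path, and streaming_job.py.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --principal streamuser@EXAMPLE.COM \
  --keytab /etc/security/keytabs/streamuser.keytab \
  streaming_job.py
```

This keeps the KDC's normal ticket lifetime and renewal policies intact, which is usually preferable to disabling expiry.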
Labels:
- Apache Kafka
- Apache Spark
- Kerberos
12-28-2017
12:06 AM
1 Kudo
Hi Pankaj,

I implemented your suggested steps. Still no luck: the Ambari Metrics Collector is still going down with the errors below.

2017-12-27 22:30:21,221 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor: HBaseAccessor getConnection failed after 10 attempts
2017-12-27 22:30:21,221 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor: Error creating Metrics Schema in HBase using Phoenix.
org.apache.phoenix.exception.PhoenixIOException: SYSTEM.CATALOG
...
... 31 more
2017-12-27 22:30:21,222 INFO org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer failed in state INITED; cause: org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricsSystemInitializationException: Error creating Metrics Schema in HBase using Phoenix.
org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricsSystemInitializationException: Error creating Metrics Schema in HBase using Phoenix.
...
2017-12-27 22:30:21,220 WARN org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource: Unable to connect to HBase store using Phoenix.
org.apache.phoenix.exception.PhoenixIOException: SYSTEM.CATALOG

Regards,
JJ
12-23-2017
02:08 PM
Hi Gurus,

Our Ambari Metrics Collector goes down. Ours is a Kerberos environment; the Ambari version is 2.4.2.0. I see the errors below in ambari-metrics-collector.log:

FATAL org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer: Error starting ApplicationHistoryServer
org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricsSystemInitializationException: Error creating Metrics Schema in HBase using Phoenix.
Caused by: org.apache.phoenix.exception.PhoenixIOException: SYSTEM.CATALOG
Caused by: org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CATALOG

And in hbase-ams-master-xxxxxxxxxx.log:

2017-12-23 07:40:57,717 ERROR [xxxxxx,61300,1513971592600_ChoreService_1] master.BackupLogCleaner: Failed to get hbase:backup table, therefore will keep all files
org.apache.hadoop.hbase.TableNotFoundException: hbase:backup

I followed the procedure below (from https://community.hortonworks.com/questions/73577/problem-with-ambari-metrics.html) but had no luck:

1) Turn on Maintenance mode
2) Stop Ambari Metrics
3) hadoop fs -rmr /ams/hbase/*
4) rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/*
5) In the ZooKeeper CLI:
   [zk: localhost:2181(CONNECTED) 0] ls /
   [zk: localhost:2181(CONNECTED) 1] rmr /ams-hbase-unsecure
6) Start Ambari Metrics
7) Turn off Maintenance mode

My settings are:

hbase.rootdir=hdfs://hpdprod:8020/user/ams/hbase
hbase.cluster.distributed=true
timeline.metrics.service.operation.mode=distributed

Any help is really appreciated.

Thanks,
JJ
Tags: ambari-metrics
Labels: Apache Ambari
10-31-2017
03:09 PM
Hi Raju, I tried both of your options; it still did not work. The problem is only with the Ranger ldaptool. The Unix-level ldapsearch utility works fine.

Regards,
JJ
10-31-2017
03:55 AM
The Unix ldapsearch works fine, but the Ranger ldaptool is failing.

This ldapsearch works fine:

ldapsearch -h free-ipa-dev-01.uat.txdc.datastax.com -x -b "dc=txdc,dc=datastax,dc=com" -W hadoopadmin

But the Ranger ldaptool fails:

[root@dev-rng-001 ~]# cd /usr/hdp/current/ranger-usersync/ldaptool
[root@dev-rng-001 ldaptool]# ./run.sh -d users
Ldap url [ldap://ldap.example.com:389]: ldaps://free-ipa-dev-01.uat.txdc.datastax.com:636
Bind DN [cn=admin,ou=users,dc=example,dc=com]: hadoopadmin
Bind Password:
User Search Base [ou=users,dc=example,dc=com]: dc=txdc,dc=datastax,dc=com
User Search Filter [cn=user1]: cn=*
Reading ldap properties from input.properties
ERROR: Failed to perfom ldap bind. Please verify values for ranger.usersync.ldap.binddn and ranger.usersync.ldap.ldapbindpassword
javax.naming.CommunicationException: simple bind failed: free-ipa-dev-01.uat.txdc.datastax.com:636 [Root exception is javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target]

Can you please help?

Regards,
JJ
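The "PKIX path building failed" error means the JVM running the ldaptool does not trust the certificate that the FreeIPA server presents on port 636 (ldapsearch works because it uses the system OpenLDAP trust store, not the JVM's). One common fix is to import the server's certificate into the Java truststore. A sketch only: the temp file path and the truststore location are assumptions, and the ldaptool may be configured to use its own truststore rather than the JVM default.

```shell
# Fetch the certificate presented on the LDAPS port (hostname from the post)
# and import it into the JVM default truststore (default password "changeit").
# Assumptions: /tmp path, JAVA_HOME layout, JVM-default truststore in use.
openssl s_client -connect free-ipa-dev-01.uat.txdc.datastax.com:636 </dev/null 2>/dev/null \
  | openssl x509 -outform PEM -out /tmp/ipa-ldap.crt
keytool -importcert -noprompt -alias ipa-ldap \
  -file /tmp/ipa-ldap.crt \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit
```

Importing the FreeIPA CA certificate instead of the server certificate is usually more robust, since it survives server certificate renewal.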
Labels: Apache Ranger
10-01-2017
07:50 PM
Yes, my log.dirs has 4 comma-separated directories. Out of the 4, only one directory is getting used. How can we make sure all 4 directories are used equally? Right now I have 3 brokers, and my topic has 3 partitions spread across the 3 brokers. My question is why all the directories are not getting filled (why only one). Should the number of partitions of a topic depend on the number of directories in log.dirs?

Regards,
JJ
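For context on why this happens: each partition replica lives entirely inside a single log.dirs directory, and Kafka places a new partition in the directory holding the fewest partitions. With 3 partitions, RF 3, and 3 brokers, each broker hosts roughly 3 replicas, so only a few directories are touched; creating topics with more partitions spreads data across all directories. A sketch (the topic name, ZooKeeper address, and the /dataN paths are made up for illustration):

```shell
# With 12 partitions and replication factor 3, each broker hosts 12 replicas,
# enough for the per-directory placement to touch all 4 log.dirs.
# Placeholders: topic name, zkhost, and the /dataN/kafka-logs paths.
kafka-topics.sh --zookeeper zkhost:2181 --create \
  --topic mytopic --partitions 12 --replication-factor 3

# On each broker, count the partition directories per log.dir:
for d in /data1/kafka-logs /data2/kafka-logs /data3/kafka-logs /data4/kafka-logs; do
  echo "$d: $(ls "$d" 2>/dev/null | grep -c '^mytopic-')"
done
```

Note that Kafka balances by partition count, not by bytes, so directories can still fill unevenly if partitions carry very different data volumes.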
09-30-2017
11:48 PM
Right now we have only one topic, with 3 partitions and a replication factor of 3. My log.dirs has 4 mount points (paths/directories). Only 1 mount point out of the 4 is being used; the other 3 directories listed in log.dirs are not used at all. How can I make all the mounts/directories get used?

Thanks,
JJ
Labels: Apache Kafka
09-21-2017
05:05 PM
Thanks, Sonu, it helps me a lot. Keep up this spirit!
09-21-2017
05:04 PM
Thank you very much, Geoffrey, for your insights and for supporting the community.
09-20-2017
07:38 PM
In our stack we installed HDFS and YARN, version 2.7.1.2.5. Do we still need to install MapReduce2 (which has two components: the History Server and the MapReduce2 Client)? If so, on which nodes do we need to install the MapReduce2 Client: only on DataNodes, on the YARN hosts, or only on the MapReduce2 host?

Regards,
JJ
Labels:
- Apache Hadoop
- Apache YARN
09-14-2017
07:25 AM
Hi, can you please give me the steps for a curl call from the Ranger-Usersync host to the Ranger-Admin LB URL?

Thanks,
JJ
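A minimal connectivity check of this kind might look like the sketch below. The LB hostname, port, and credentials are placeholders; the path is Ranger's public v2 REST API, which returns the service list when authentication and routing through the LB both work.

```shell
# Run from the usersync host: authenticate through the load balancer and list
# Ranger services. Placeholders: LB hostname and the admin credentials.
curl -iv -u admin:admin \
  "http://ranger-lb.example.com:6080/service/public/v2/api/service"
```

An HTTP 200 with a JSON body indicates the usersync host can reach Ranger Admin through the LB; a connection reset or 401 points at the LB config or credentials respectively.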
09-14-2017
07:24 AM
Hi, when we enabled Kerberos, principals and keytabs were created for only one Ranger Admin host. No principals or keytabs were created for the other Ranger Admin host or for the LB. Do we need to create them on the missing hosts using the kadmin utility?

Thanks,
JJ
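If the missing principals do need to be created manually, the usual kadmin flow looks roughly like this. The realm, host names, admin principal, and keytab path are all placeholders; the principal names should mirror what was generated for the first Ranger Admin host.

```shell
# Create a service principal for the second Ranger Admin host and export its
# keytab; copy the keytab to that host and fix ownership afterwards.
# Placeholders: admin principal, realm, hostname, keytab path.
kadmin -p admin/admin@EXAMPLE.COM -q \
  "addprinc -randkey rangeradmin/ranger-host2.example.com@EXAMPLE.COM"
kadmin -p admin/admin@EXAMPLE.COM -q \
  "xst -k /etc/security/keytabs/rangeradmin.service.keytab rangeradmin/ranger-host2.example.com@EXAMPLE.COM"
```

Note that `xst` rotates the key it exports, so the keytab produced here supersedes any older keytab for that principal.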
09-12-2017
05:29 PM
Hi,

We have successfully set up Ranger Admin HA, with the policymgr_external_url parameter (External URL) pointing to the load balancer VIP. With this configuration, the Ranger sync with Active Directory is not happening. If we replace the load balancer VIP with an actual hostname, we are able to get the full Active Directory user list.

Is there a problem with pointing to a load balancer VIP? "http://dev-rag.dataquest.com:6080" is not working for the AD user sync. Can we put multiple hostnames in the policymgr_external_url parameter (External URL), like "http://dev-rag-001.dataquest.com,dev-rag-002.dataquest.com:6080"?

Please share your thoughts and experiences.

Regards,
JJ
Labels: Apache Ranger
09-11-2017
05:46 PM
Hi Raju Sir, our issue got resolved. We had a VIP (HA for Ranger), and policymgr_external_url was set to the VIP name, which was not resolving correctly. When we changed policymgr_external_url to the actual hostname, we could see the AD accounts coming into Ranger. Thanks for all your support and follow-up. Good work, you guys!

Regards,
JJ
09-07-2017
03:14 PM
Good Morning Geoffrey,

FYI:

ranger.ldap.group.searchfilter=(member=uid={0},ou=Users,dc=dev,dc=dataquest,dc=com)
ranger.ldap.group.searchbase=dc=dev,dc=dataquest,dc=com

The rest of my configuration is as below.

Common configs:

ranger.usersync.source.impl.class=LDAP/AD
ranger.usersync.source.impl.class=org.apache.ranger.ldapusersync.process.LdapUserGroupBuilder
ranger.usersync.ldap.url=ldaps://ad.dev.dataquest.com:636
ranger.usersync.ldap.binddn=ad-auth
ranger.usersync.ldap.ldapbindpassword=xxxxxxxxxxxx

User configs:

ranger.usersync.ldap.user.searchbase=dc=dev,dc=dataquest,dc=com
ranger.usersync.ldap.user.searchfilter=(objectcategory=person)
ranger.usersync.ldap.user.searchscope=sub
ranger.usersync.ldap.user.objectclass=person
ranger.usersync.ldap.user.nameattribute=sAMAccountName
ranger.usersync.ldap.user.groupnameattribute=memberof,ismemberof

Group configs:

ranger.usersync.group.searchbase=dc=dev,dc=dataquest,dc=com
ranger.usersync.group.searchfilter=ou=core,dc=dev,dc=dataquest,dc=com
ranger.usersync.group.objectclass=groupofnames
ranger.usersync.group.nameattribute=distinguishedName
ranger.usersync.group.memberattributename=member
ranger.usersync.group.searchenabled=true
ranger.usersync.group.search.first.enabled=false

Ranger -> Advanced -> LDAP settings:

ranger.ldap.base.dn=dc=dev,dc=dataquest,dc=com
ranger.ldap.bind.dn=ad-auth
ranger.ldap.bind.password=xxxxxxxx
ranger.ldap.group.roleattribute=uid
ranger.ldap.referral=ignore
ranger.ldap.url=ldaps://ad.dev.dataquest.com:636
ranger.ldap.user.dnpattern=cn=ldapadmin,ou=Users,dc=dev,dc=dataquest,dc=com
ranger.ldap.user.searchfilter=(uid={0})
ranger.usersync.ldap.referral=follow

In "Advanced ranger-admin-site" I set the properties below:

ranger.ldap.group.searchfilter=(member=uid={0},ou=Users,dc=dev,dc=dataquest,dc=com)
ranger.ldap.group.searchbase=dc=dev,dc=dataquest,dc=com

In "Advanced ranger-ugsync-site" I set the properties below:

ranger.usersync.ldap.username.caseconversion=none
ranger.usersync.ldap.searchBase=dc=dev,dc=dataquest,dc=com
ranger.usersync.group.searchscope=sub
ranger.usersync.ldap.groupname.caseconversion=none
ranger.usersync.ldap.bindalias=testldapalias
ranger.usersync.sink.impl.class=org.apache.ranger.unixusersync.process.PolicyMgrUserGroupBuilder
ranger.usersync.truststore.file=
ranger.usersync.sleeptimeinmillisbetweensynccycle=86400000

Advanced ranger-tagsync-site:

ranger.tagsync.dest.ranger.ssl.config.filename=

Waiting for your valuable suggestions.
09-07-2017
06:29 AM
Ranger is not getting updated with all the Active Directory accounts: select * from x_portal_user; does not show the Active Directory accounts, and usersync logs show:

ERROR PolicyMgrUserGroupBuilder [UnixUserSyncThread] - Failed to add User Group Info -- Connection reset

Kerberos was implemented successfully. After enabling Kerberos, we set the following values for Ranger:

1) ranger.usersync.kerberos.keytab: /etc/security/keytabs/rangerusersync.service.keytab
2) ranger.usersync.kerberos.principal: rangerusersync/_HOST@DEV.DATAQUEST.COM
3) ranger.usersync.policymgr.username: rangerusersync

However, klist for the keytab shows the principal as rangerusersync/rng-node1.dev.dataquest.com@DEV.DATAQUEST.COM.

Note: we do not have any SSL implementation, and no certificate, keystore, or truststore is needed for us. Below is the error message from usersync.log:

```
07 Sep 2017 05:02:42 ERROR PolicyMgrUserGroupBuilder [UnixUserSyncThread] - Failed to add User Group Info :
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketException: Connection reset
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:151)
at com.sun.jersey.api.client.Client.handle(Client.java:648)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:680)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:568)
at org.apache.ranger.unixusersync.process.PolicyMgrUserGroupBuilder.getUsergroupInfo(PolicyMgrUserGroupBuilder.java:567)
at org.apache.ranger.unixusersync.process.PolicyMgrUserGroupBuilder.access$500(PolicyMgrUserGroupBuilder.java:72)
at org.apache.ranger.unixusersync.process.PolicyMgrUserGroupBuilder$2.run(PolicyMgrUserGroupBuilder.java:539)
at org.apache.ranger.unixusersync.process.PolicyMgrUserGroupBuilder$2.run(PolicyMgrUserGroupBuilder.java:535)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.ranger.unixusersync.process.PolicyMgrUserGroupBuilder.addUserGroupInfo(PolicyMgrUserGroupBuilder.java:535)
at org.apache.ranger.unixusersync.process.PolicyMgrUserGroupBuilder.addOrUpdateUser(PolicyMgrUserGroupBuilder.java:340)
at org.apache.ranger.ldapusersync.process.LdapUserGroupBuilder.updateSink(LdapUserGroupBuilder.java:327)
at org.apache.ranger.usergroupsync.UserGroupSync.run(UserGroupSync.java:58)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:706)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:249)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
... 15 more
07 Sep 2017 05:02:42 INFO LdapUserGroupBuilder [UnixUserSyncThread] - groupSearch is enabled, would search for groups and compute memberships
07 Sep 2017 05:02:43 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LDAPUserGroupBuilder.getGroups() completed with group count: 0
```

Any help is really appreciated.
Labels: Apache Ranger
09-07-2017
05:00 AM
Thanks, Jay, for your perfect solution.
08-20-2017
05:42 AM
Hi,

I am using MySQL as the database for Ambari (version 2.4.2.0) and would like to change the Ambari database user password. Changing the password on the MySQL side is easy. The question is how to update:

1) server.jdbc.rca.user.passwd=${alias=ambari.db.password}
2) server.jdbc.user.passwd=${alias=ambari.db.password}

in the file /etc/ambari-server/conf/ambari.properties.

Note: I am not trying to change the Ambari admin password used to log in to the Ambari UI.

Please help.

Regards,
JJ
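Since both properties point at the ${alias=ambari.db.password} entry in Ambari's credential store rather than holding the password in plain text, one approach (a sketch only; the exact setup menu choices vary by Ambari version) is to change the password in MySQL and then re-enter it through the server setup so the stored alias is refreshed:

```shell
# 1) Change the password on the MySQL side first, e.g. something like:
#      ALTER USER 'ambari'@'%' IDENTIFIED BY 'NewPassword';
#    (exact SQL depends on your MySQL version)
# 2) Re-enter the new password through Ambari's setup, which rewrites the
#    ambari.db.password alias in the credential store, then restart:
ambari-server stop
ambari-server setup    # choose the advanced database configuration and re-enter the password
ambari-server start
```

This avoids editing ambari.properties by hand, which would not work anyway while the value is an alias into the credential store.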
Labels: Apache Ambari
08-05-2017
03:51 PM
Hi Experts, can I have more than one active NameNode in an HDP cluster? I would prefer one standby NameNode for multiple active NameNodes. Is this configuration possible?

Thanks,
JJ
Labels: Apache Hadoop
07-26-2017
08:58 PM
Hi,

How can we modify the properties below to point to a new location?

hadoop_pid_dir_prefix
yarn_pid_dir_prefix
hive_pid_dir
hcat_pid_dir

All these parameters are grayed out and cannot be modified. I want to change their existing configuration to my custom folder/directory.

Thanks,
JJ
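Properties that are grayed out in the Ambari UI can usually still be changed through Ambari's bundled config script or REST API. A hedged sketch using the configs.sh helper shipped with Ambari (the Ambari host, cluster name, credentials, and target path are placeholders, and the exact script arguments may differ between Ambari versions):

```shell
# Update hadoop_pid_dir_prefix in the hadoop-env config type via Ambari's
# bundled helper script; restart the affected services afterwards.
# Placeholders: credentials, ambari-host.example.com, MyCluster, /custom/run/hadoop.
cd /var/lib/ambari-server/resources/scripts
./configs.sh -u admin -p admin set ambari-host.example.com MyCluster \
  hadoop-env hadoop_pid_dir_prefix /custom/run/hadoop
```

The same pattern applies to yarn_pid_dir_prefix (in yarn-env), hive_pid_dir (hive-env), and hcat_pid_dir (hive-env/webhcat-env), each under its own config type.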
Labels:
- Apache Hadoop
- Apache Hive
- Apache YARN
07-24-2017
11:46 PM
How can we modify the Ranger parameters below?

- ranger_admin_log_dir
- ranger_pid_dir
- ranger_usersync_log_dir
- ranger.tagsync.logdir

Right now these are grayed out in Ambari and cannot be modified. From the OS command prompt, I tried to modify the two files below:

/usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh
/usr/hdp/current/ranger-usersync/ranger-usersync-services.sh

for example, changing

pidf=/var/run/ranger/rangeradmin.pid

to

pidf=/var/log/hdp/ranger/run/rangeradmin.pid

After that, the Ranger servers failed to come up. Your help is needed.
Labels: Apache Ranger
07-18-2017
06:39 PM
I am using a MySQL database, but I don't see a database user named "admin"; I see the "ambari" user. Yet from the Ambari web UI I log in as the "admin" user. Do you know why?

mysql> select db 'DATABASE', host HOST, user USER from mysql.db where db = 'ambari';
+----------+------+--------+
| DATABASE | HOST | USER   |
+----------+------+--------+
| ambari   | %    | ambari |
+----------+------+--------+
1 row in set (0.00 sec)
07-16-2017
07:57 AM
Jay,

I am not on the HDP Sandbox; these are all our production bare-metal servers. Any suggestions, please?

Thanks,
JJ
07-15-2017
06:23 AM
Hi,

ambari-admin-password-reset says "command not found". How do I reset the Ambari admin password? Do we need to run it from a particular directory/path? We are running Ambari version 2.4.2.0.

I have seen a couple of links where they reset the Ambari password from MySQL. Is that the right approach?

Regards,
JJ
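For what it's worth, ambari-admin-password-reset ships with the HDP Sandbox rather than a standard Ambari install, which would explain "command not found". The widely posted community workaround for a regular install is to reset the stored hash directly in the Ambari database. A heavily hedged sketch: the hash below is the commonly posted hash of the default password "admin" — verify it against your Ambari version, and back up the database before touching it.

```shell
# Reset the Ambari UI "admin" user's password hash back to the default
# ("admin") in the Ambari MySQL database, then restart Ambari Server and
# change the password from the UI. Verify the hash for your Ambari version
# before running this; back up the ambari database first.
mysql -u ambari -p ambari -e \
  "UPDATE users SET user_password='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' WHERE user_name='admin';"
ambari-server restart
```

After logging in as admin/admin, set a new password immediately via the Ambari UI.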
Labels: Apache Ambari