Member since 05-20-2019 · 17 Posts · 0 Kudos Received · 0 Solutions
11-30-2021
09:48 AM
@sarm After digging a little more: producers expose a JMX MBean with the object name kafka.producer:type=producer-metrics,client-id=producer-1, whose record-send-total attribute shows the total number of records sent by that producer. For more detail on the metrics available in Kafka, I would suggest checking this Cloudera community article.
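As a minimal sketch of what to query: the object name above follows a fixed pattern, with "producer-1" being Kafka's default client id (substitute your producer's configured client.id). The helper below only assembles that name; the function name is mine, not a Kafka API.

```python
# Sketch: assemble the JMX ObjectName for the per-producer metrics MBean
# mentioned above. "producer-1" is Kafka's default client id; substitute
# your producer's client.id. The attribute to read is "record-send-total".

def producer_metric_object_name(client_id: str) -> str:
    """Return the JMX ObjectName of a Kafka producer's metrics MBean."""
    return f"kafka.producer:type=producer-metrics,client-id={client_id}"

# The MBean attribute holding the cumulative count of records sent:
RECORD_SEND_TOTAL = "record-send-total"

print(producer_metric_object_name("producer-1"))
# kafka.producer:type=producer-metrics,client-id=producer-1
```

You would point any JMX client (jconsole, JmxTool, a monitoring agent) at this object name and read the record-send-total attribute.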
09-11-2020
02:43 AM
Hi Abdul, Could you please share the entire YARN application logs (or the entire stack trace) for our analysis? Thanks, AKR
07-09-2020
02:05 AM
@sarm The minimum heap size should be set to 4 GB. Increase the memory for higher replica counts or a higher number of blocks per DataNode. When increasing the memory, Cloudera recommends an additional 1 GB of memory for every 1 million replicas above 4 million on the DataNodes. For example, 5 million replicas require 5 GB of memory. Set this value using the Java Heap Size of DataNode in Bytes HDFS configuration property.

Reference: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_hardware_requirements.html#concept_fzz_dq4_gbb

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
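The sizing rule above can be written down as a small calculation. This is a sketch; the function name is mine, and the 4 GB baseline at 4 million replicas comes from the guidance quoted in the post.

```python
import math

def datanode_heap_gb(replica_count: int) -> int:
    """DataNode heap per the rule above: a 4 GB minimum, plus 1 GB for
    each (started) million replicas above 4 million."""
    base_gb, base_replicas = 4, 4_000_000
    extra_millions = max(0, math.ceil((replica_count - base_replicas) / 1_000_000))
    return base_gb + extra_millions

print(datanode_heap_gb(5_000_000))  # 5 -- the example from the post
print(datanode_heap_gb(3_000_000))  # 4 -- never below the 4 GB minimum
```

Whatever this returns, remember the actual property (Java Heap Size of DataNode in Bytes) is expressed in bytes, so multiply by 1024**3 before setting it.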
05-25-2020
11:31 PM
Hello @sarm,

What are the impacts of changing a service account password in a kerberized cluster? Service accounts such as hdfs, hbase, and spark rely on keytabs rather than passwords. Their principals look like any other user principal, but the services depend on having valid keytabs available. If the passwords for these service accounts expire or change, you will need to regenerate their keytabs once the password is updated. You can regenerate these keytabs in Ambari by going to the Kerberos screen and pressing the "Regenerate Keytabs" button; this also automatically distributes the keytabs to wherever they are needed. Note that it is always best to restart the cluster when you do this.

Note: to keep this process smooth, change the password for one service account, restart its service, and check for any impact before proceeding to the other service accounts.

To answer your question, changing the passwords of the service accounts will not affect the running services, since the passwords are not used to start the services. Passwords are not required during service startup or during the lifetime of the process.
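The one-account-at-a-time rollover described above can be sketched as a driver loop. The four hook functions are placeholders for whatever mechanism you actually use (Ambari's Kerberos screen or API, kadmin, your service checks); none of them are real Ambari or Kerberos APIs.

```python
def roll_service_accounts(accounts, change_password, regenerate_keytab,
                          restart_service, is_healthy):
    """Rotate one service account at a time, per the note above: change
    the password, regenerate its keytab, restart the service, and verify
    health before moving on to the next account."""
    for account in accounts:
        change_password(account)
        regenerate_keytab(account)   # e.g. Ambari's "Regenerate Keytabs"
        restart_service(account)
        if not is_healthy(account):
            raise RuntimeError(f"{account}: unhealthy after keytab rollover; stopping")

# Example with stub hooks that just record the call order:
calls = []
roll_service_accounts(
    ["hdfs", "hbase"],
    change_password=lambda a: calls.append(("passwd", a)),
    regenerate_keytab=lambda a: calls.append(("keytab", a)),
    restart_service=lambda a: calls.append(("restart", a)),
    is_healthy=lambda a: True,
)
```

The point of the loop structure is that a failed health check stops the rollover before it touches the next account, which is exactly the "one account, then observe" advice above.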
04-13-2020
09:50 PM
Hi @jsensharma, Thanks for your comments! I believe I am using a lower version of Hadoop, which is why I am facing the issue:

Hadoop 2.7.3.2.6.5.0-292
Subversion git@github.com:hortonworks/hadoop.git -r 3091053c59a62c82d82c9f778c48bde5ef0a89a1
Compiled by jenkins on 2018-05-11T07:53Z
Compiled with protoc 2.5.0
From source with checksum abed71da5bc89062f6f6711179f2058
This command was run using /usr/hdp/2.6.5.0-292/hadoop/hadoop-common-2.7.3.2.6.5.0-292.jar

# javap -cp /usr/hdp/2.6.5.0-292/hadoop-hdfs/hadoop-hdfs-2.7.3.2.6.5.0-292.jar org.apache.hadoop.hdfs.web.resources.PutOpParam.Op
Compiled from "PutOpParam.java"
public final class org.apache.hadoop.hdfs.web.resources.PutOpParam$Op extends java.lang.Enum<org.apache.hadoop.hdfs.web.resources.PutOpParam$Op> implements org.apache.hadoop.hdfs.web.resources.HttpOpParam$Op {
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op CREATE;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op MKDIRS;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op CREATESYMLINK;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op RENAME;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op SETREPLICATION;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op SETOWNER;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op SETPERMISSION;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op SETTIMES;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op RENEWDELEGATIONTOKEN;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op CANCELDELEGATIONTOKEN;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op MODIFYACLENTRIES;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op REMOVEACLENTRIES;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op REMOVEDEFAULTACL;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op REMOVEACL;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op SETACL;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op SETXATTR;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op REMOVEXATTR;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op CREATESNAPSHOT;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op RENAMESNAPSHOT;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op NULL;
  final boolean doOutputAndRedirect;
  final int expectedHttpResponseCode;
  final boolean requireAuth;
  public static org.apache.hadoop.hdfs.web.resources.PutOpParam$Op[] values();
  public static org.apache.hadoop.hdfs.web.resources.PutOpParam$Op valueOf(java.lang.String);
  public org.apache.hadoop.hdfs.web.resources.HttpOpParam$Type getType();
  public boolean getRequireAuth();
  public boolean getDoOutput();
  public boolean getRedirect();
  public int getExpectedHttpResponseCode();
  public java.lang.String toQueryString();
  static {};
}