Member since: 02-27-2023
Posts: 34
Kudos Received: 3
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 127 | 05-09-2023 03:20 AM |
|  | 115 | 05-09-2023 03:16 AM |
|  | 347 | 03-30-2023 10:41 PM |
|  | 878 | 03-30-2023 07:25 PM |
05-12-2023
02:30 AM
Has anyone tried that before?
05-09-2023
03:20 AM
I finally managed to solve the problem by switching the OpenJDK version from 8u362 to 8u232. However, is OpenJDK 8u232 the only supported version for CDP 7.1.8 with Kerberos enabled? Could someone clarify this for me, please? Thank you.
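The "unsupported KeyType" error mentioned earlier in this thread usually means the encryption types in the keytab or issued tickets do not match what the JRE permits (newer JDK 8 updates dropped weak enctypes by default). A minimal sketch for checking this, with placeholder paths, principal, and realm:

```shell
# List the key types stored in a service keytab (-k keytab, -t timestamps, -e enctypes):
klist -kte /path/to/service.keytab

# Show the enctypes of the TGT the KDC actually issues:
kinit user@EXAMPLE.REALM
klist -e

# If the keytab only contains enctypes the newer JDK rejects (e.g. DES, RC4),
# regenerate credentials with AES enctypes, or as a temporary workaround set
# in /etc/krb5.conf:
#   [libdefaults]
#   allow_weak_crypto = true
```

Comparing the two lists should show whether the JDK downgrade merely re-enabled an enctype that the newer release had dropped.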
05-09-2023
03:16 AM
Thank you for your help @kolli_sandeep . In the end I made an API call to interrupt the parcel activation, then removed the corresponding parcel file and the ./flood folder in /opt/cloudera/parcels. After that, the parcels redistributed and activated successfully.
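For readers hitting the same stuck state, the steps above can be sketched against the CM REST API. Host, credentials, cluster name, parcel version, and the API version (v41 here) are placeholders and depend on your deployment:

```shell
CM=http://cm-host:7180/api/v41
CLUSTER="Cluster%201"
PRODUCT=CDH
VERSION=7.1.8-1.cdh7.1.8.p0.30990532

# Cancel the in-flight distribution for the parcel:
curl -u admin:admin -X POST \
  "$CM/clusters/$CLUSTER/parcels/products/$PRODUCT/versions/$VERSION/commands/cancelDistribution"

# Then, on the affected agents, remove the leftover parcel file and the
# flood staging folder (written as ./flood in the post) under
# /opt/cloudera/parcels before re-distributing from the Parcels page.
```

The parcel resource also exposes deactivate and startRemovalOfDistribution commands for parcels that are already activated.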
05-09-2023
03:07 AM
Hi all, I have a CDP 7.1.8 cluster running in my environment. Recently I have wanted to try enabling CM server High Availability, so I studied the instructions at https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/managing-clusters/topics/cm-ha-configure-steps.html. My question: when I previously migrated the CM server to another host, I could not see my cluster on the new CM server console until I manually restored the cluster. I suspect that after enabling CM server HA, when the system fails over to the passive CM server, I will likewise not see my cluster. Could someone clarify this for me? Thank you very much.
05-09-2023
02:55 AM
Hi all, I have a CDP 7.1.8 cluster running in my environment. Recently I migrated the existing CM server to another host, following the instructions at https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/managing-clusters/topics/cm-moving-the-cm-server-new-host2.html. The CM server was successfully migrated to the target host, and the new CM server recognized the cluster nodes. However, the cluster did not appear on the new CM server console, and I ultimately had to restore it manually using https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/configuring-clusters/topics/cm-api-import-configuration.html. My question: am I supposed to restore the cluster manually? I expected to see the cluster immediately, without extra work, after switching CM servers. Could someone clarify this for me? Thank you.
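The manual restore in the linked doc boils down to exporting and re-importing the CM deployment description via the API. A sketch of the two calls, with placeholder hosts, credentials, and API version:

```shell
# On the old CM server, export the full deployment description:
curl -u admin:admin \
  "http://old-cm-host:7180/api/v41/cm/deployment" > cm-deployment.json

# On the new CM server, import it (this replaces the current deployment):
curl -u admin:admin -X PUT -H "Content-Type: application/json" \
  --upload-file cm-deployment.json \
  "http://new-cm-host:7180/api/v41/cm/deployment?deleteCurrentDeployment=true"
```

The export should be taken before the migration, since the cluster definition lives in the CM database rather than on the agents.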
05-03-2023
12:06 AM
From the system log, I found an error saying "unsupported KeyType". I am using OpenJDK 8u332. Does anyone know how to solve this? Thank you.
05-02-2023
03:22 AM
Hi all, I have a cluster on CDP 7.1.8 Private Cloud Base. The Cloudera Management Service shows bad health and indicates that the connection to the KDC server is not available. I can regenerate missing credentials under Administration > Security > Kerberos Credentials, and I can run kinit on the CM server host, so I have no idea why the connection shows as failed. Please let me know if I should provide further information. Thank you.
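A few quick checks from the CM server host can narrow down whether this is network reachability or a credential problem. Realm, KDC host, keytab path, and principal below are placeholders, not values from the post:

```shell
# Check which KDC the host is configured to use for the realm:
grep -A3 "EXAMPLE.REALM" /etc/krb5.conf

# Confirm the KDC port (88 by default) is reachable from this host:
nc -vz kdc-host 88

# Try authenticating with the same keytab/principal CM uses, if available:
kinit -kt /path/to/cmf.keytab cloudera-scm/admin@EXAMPLE.REALM
```

If a plain kinit works but CM still reports the KDC as unreachable, the mismatch is often between CM's Kerberos settings (Administration > Settings > Kerberos) and the host's /etc/krb5.conf.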
Labels:
- Cloudera Data Platform (CDP)
- Kerberos
05-02-2023
03:13 AM
Hi all, I have a CDP 7.1.8 Private Cloud Base cluster. Today I tried migrating the CM server, following the instructions in https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/managing-clusters/topics/cm-moving-the-cm-server-new-host2.html. After I manually restored the cluster following the instructions in https://docs.cloudera.com/cdp-private-cloud-base/7.1.3/configuring-clusters/topics/cm-api-import-configuration.html, the cluster cannot restart. On the Parcels page of the CM console, the parcels show as activating but keep loading forever. Here are the parcel-related log messages from cloudera-scm-agent.log:

[02/May/2023 03:29:55 -0400] 6258 MainThread parcel ERROR Error while attempting to modify permissions of file '/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hadoop-0.20-mapreduce/sbin/Linux/task-controller'.
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/parcel.py", line 586, in ensure_permissions
file = cmf.util.validate_and_open_fd(path, self.get_parcel_home(parcel))
OSError: [Errno 2] No such file or directory: '/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hadoop-0.20-mapreduce/sbin/Linux/task-controller'
[02/May/2023 03:29:55 -0400] 6258 MainThread downloader INFO Downloader path: /opt/cloudera/parcel-cache
[02/May/2023 03:29:55 -0400] 6258 MainThread parcel_cache INFO Using /opt/cloudera/parcel-cache for parcel cache
[02/May/2023 03:29:55 -0400] 6258 MainThread throttling_logger WARNING Failed parsing alternatives line: sqoop-export string index out of range link currently points to /opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/bin/sqoop-export
[02/May/2023 03:29:59 -0400] 6258 MainThread parcel_cache INFO Deleting unmanaged parcel CDH-7.1.8-1.cdh7.1.8.p0.30990532
[02/May/2023 03:30:40 -0400] 6258 MainThread parcel_cache INFO Deleting unmanaged parcel SPARK3-3.3.0.3.3.7180.0-274-1.p0.31212967
[02/May/2023 03:30:40 -0400] 6258 MainThread parcel INFO prepare_environment begin: {}, [], []
[02/May/2023 03:30:40 -0400] 6258 MainThread parcel INFO No parcels activated for use
[02/May/2023 03:31:30 -0400] 6258 __run_queue parcel INFO prepare_environment begin: {u'SPARK3': u'3.3.0.3.3.7180.0-274-1.p0.31212967', u'CFM': u'2.1.5.0-215', u'CDH': u'7.1.8-1.cdh7.1.8.p0.30990532'}, [], []
[02/May/2023 03:31:30 -0400] 6258 __run_queue parcel INFO Service does not request any parcels
[... the same "prepare_environment begin" / "Service does not request any parcels" pair repeats through 05:13:11, apart from one "prepare_environment begin: {}" / "No parcels activated for use" pair at 04:57:36 ...]
[02/May/2023 05:30:15 -0400] 28744 MainThread agent INFO To override these variables, use /etc/cloudera-scm-agent/config.ini. Environment variables for CDH locations are not used when CDH is installed from parcels.
[02/May/2023 05:30:19 -0400] 28744 MainThread agent INFO Previously active parcels: {}
[02/May/2023 05:30:19 -0400] 28744 MainThread agent INFO Using parcels directory from server provided value: /opt/cloudera/parcels
[02/May/2023 05:30:19 -0400] 28744 MainThread parcel INFO Agent does create users/groups
[02/May/2023 05:30:19 -0400] 28744 MainThread parcel INFO Agent does parcel permissions
[02/May/2023 05:30:19 -0400] 28744 MainThread downloader INFO Downloader path: /opt/cloudera/parcel-cache
[02/May/2023 05:30:19 -0400] 28744 MainThread parcel_cache INFO Using /opt/cloudera/parcel-cache for parcel cache
[02/May/2023 05:30:19 -0400] 28744 MainThread throttling_logger WARNING Failed parsing alternatives line: sqoop-export string index out of range link currently points to /opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/bin/sqoop-export
[02/May/2023 05:31:59 -0400] 29223 MainThread agent INFO To override these variables, use /etc/cloudera-scm-agent/config.ini. Environment variables for CDH locations are not used when CDH is installed from parcels.
[02/May/2023 05:32:03 -0400] 29223 MainThread agent INFO Previously active parcels: {}
[02/May/2023 05:32:03 -0400] 29223 MainThread agent INFO Using parcels directory from server provided value: /opt/cloudera/parcels

Please let me know if I need to provide any further information. Thank you.
04-26-2023
01:39 AM
Hi all, I am configuring Hue and Impala to authenticate using LDAP. An LDAP user can successfully log in to the Hue UI and access Impala with impala-shell through the Impala load balancer. When logged in, the LDAP user can run a basic Hive query like "show databases". However, when I switch to an Impala session, it shows errors and the LDAP user fails to run Impala queries. Here is my LDAP-related configuration in Impala. Here is the configuration in Hue, especially for the Impala load balancer. By the way, my CDP cluster has Kerberos authentication enabled. Please help me out with this issue, and feel free to tell me if I need to provide more information. Thank you in advance.
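To separate a Hue-side problem from an Impala-side one, it can help to test LDAP auth against the load balancer with impala-shell directly. A sketch, with placeholder host, port, and user:

```shell
# -l enables LDAP authentication; -u sets the user; impala-shell prompts
# for the password interactively:
impala-shell -i impala-lb.my.cloudera.lab:21000 -l -u ldapuser

# Without TLS, impala-shell refuses to send a clear-text password unless
# this flag is set explicitly:
impala-shell -i impala-lb.my.cloudera.lab:21000 -l -u ldapuser --auth_creds_ok_in_clear
```

If this works but Hue still fails, the issue is likely in Hue's [impala] configuration (server host/port or auth settings) rather than in Impala's LDAP setup.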
04-03-2023
01:45 AM
Hi all, I am practicing Kafka on my CDP 7.1.8 cluster with Kerberos enabled. I can create topics under Kerberos authentication. However, when I test producing and consuming messages, the consumer side never receives a message. Here is the output from both sides:

Consumer:
kafka-console-consumer --bootstrap-server host2.my.cloudera.lab:9092 --topic topic001 --from-beginning --consumer.config /root/kafka/krb-client.properties
23/04/03 04:37:00 INFO utils.Log4jControllerRegistration$: [main]: Registered kafka:type=kafka.Log4jController MBean
23/04/03 04:37:01 INFO consumer.ConsumerConfig: [main]: ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [host2.my.cloudera.lab:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = console-consumer
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = console-consumer-82044
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = SASL_PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 45000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
23/04/03 04:37:01 INFO authenticator.AbstractLogin: [main]: Successfully logged in.
23/04/03 04:37:01 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT refresh thread started.
23/04/03 04:37:01 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT valid starting at: 2023-04-03T02:52:45.000-0400
23/04/03 04:37:01 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT expires: 2023-04-04T02:52:45.000-0400
23/04/03 04:37:01 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT refresh sleeping until: 2023-04-03T22:11:03.897-0400
23/04/03 04:37:01 INFO utils.AppInfoParser: [main]: Kafka version: 3.1.1.7.1.8.0-801
23/04/03 04:37:01 INFO utils.AppInfoParser: [main]: Kafka commitId: 15839ba4eb998a33
23/04/03 04:37:01 INFO utils.AppInfoParser: [main]: Kafka startTimeMs: 1680511021242
23/04/03 04:37:01 INFO consumer.KafkaConsumer: [main]: [Consumer clientId=console-consumer, groupId=console-consumer-82044] Subscribed to topic(s): topic001
23/04/03 04:37:01 INFO clients.Metadata: [main]: [Consumer clientId=console-consumer, groupId=console-consumer-82044] Resetting the last seen epoch of partition topic001-0 to 0 since the associated topicId changed from null to MyVuTpA9Tfayosq_QihlwA
23/04/03 04:37:01 INFO clients.Metadata: [main]: [Consumer clientId=console-consumer, groupId=console-consumer-82044] Cluster ID: 7vkx3ceERrKii_vcW_gViQ

Producer:
[root@host1 ~]# kafka-console-producer --broker-list host1.my.cloudera.lab:9092 host2.my.cloudera.lab:9092 --topic topic001 --producer.config /root/kafka/krb-client.properties
23/04/03 04:37:44 INFO utils.Log4jControllerRegistration$: [main]: Registered kafka:type=kafka.Log4jController MBean
23/04/03 04:37:44 INFO producer.ProducerConfig: [main]: ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [host1.my.cloudera.lab:9092]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = console-producer
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 1000
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 1500
retries = 3
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = SASL_PLAINTEXT
security.providers = null
send.buffer.bytes = 102400
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
23/04/03 04:37:44 INFO producer.KafkaProducer: [main]: [Producer clientId=console-producer] Instantiated an idempotent producer.
23/04/03 04:37:44 INFO authenticator.AbstractLogin: [main]: Successfully logged in.
23/04/03 04:37:44 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT refresh thread started.
23/04/03 04:37:44 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT valid starting at: 2023-04-03T02:52:45.000-0400
23/04/03 04:37:44 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT expires: 2023-04-04T02:52:45.000-0400
23/04/03 04:37:44 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT refresh sleeping until: 2023-04-03T23:06:05.063-0400
23/04/03 04:37:44 INFO utils.AppInfoParser: [main]: Kafka version: 3.1.1.7.1.8.0-801
23/04/03 04:37:44 INFO utils.AppInfoParser: [main]: Kafka commitId: 15839ba4eb998a33
23/04/03 04:37:44 INFO utils.AppInfoParser: [main]: Kafka startTimeMs: 1680511064283
>23/04/03 04:37:44 INFO clients.Metadata: [kafka-producer-network-thread | console-producer]: [Producer clientId=console-producer] Cluster ID: 7vkx3ceERrKii_vcW_gViQ
23/04/03 04:37:44 INFO internals.TransactionManager: [kafka-producer-network-thread | console-producer]: [Producer clientId=console-producer] ProducerId set to 5 with epoch 0
23/04/03 04:37:48 INFO clients.Metadata: [kafka-producer-network-thread | console-producer]: [Producer clientId=console-producer] Resetting the last seen epoch of partition topic001-0 to 0 since the associated topicId changed from null to MyVuTpA9Tfayosq_QihlwA
>
>hello
>world

Please help me out with this issue, and feel free to tell me if I need to provide more information. Thank you.
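The /root/kafka/krb-client.properties file itself is not shown in the post. For reference, a Kerberized CDP Kafka client typically uses something like the following (a sketch, not the poster's actual file):

```properties
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useTicketCache=true;
```

Since both the consumer and producer logs show "Successfully logged in" and a resolved cluster ID, authentication appears to be working; the missing messages may instead be an authorization or topic/partition issue rather than a Kerberos one.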