Member since: 02-27-2023
Posts: 37
Kudos Received: 3
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4989 | 05-09-2023 03:20 AM |
| | 2533 | 05-09-2023 03:16 AM |
| | 2381 | 03-30-2023 10:41 PM |
| | 17909 | 03-30-2023 07:25 PM |
05-02-2023 03:13 AM
Hi all, I have a CDP 7.1.8 Private Cloud Base cluster. Today I tried migrating the CM server to a new host, following the instructions in https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/managing-clusters/topics/cm-moving-the-cm-server-new-host2.html. After manually restoring the cluster configuration following the instructions in https://docs.cloudera.com/cdp-private-cloud-base/7.1.3/configuring-clusters/topics/cm-api-import-configuration.html, the cluster cannot restart. On the Parcels page in the CM console, the parcels show as activating but keep loading forever. Here are the parcel-related messages from cloudera-scm-agent.log:

[02/May/2023 03:29:55 -0400] 6258 MainThread parcel ERROR Error while attempting to modify permissions of file '/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hadoop-0.20-mapreduce/sbin/Linux/task-controller'.
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/parcel.py", line 586, in ensure_permissions
file = cmf.util.validate_and_open_fd(path, self.get_parcel_home(parcel))
OSError: [Errno 2] No such file or directory: '/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hadoop-0.20-mapreduce/sbin/Linux/task-controller'
[02/May/2023 03:29:55 -0400] 6258 MainThread downloader INFO Downloader path: /opt/cloudera/parcel-cache
[02/May/2023 03:29:55 -0400] 6258 MainThread parcel_cache INFO Using /opt/cloudera/parcel-cache for parcel cache
[02/May/2023 03:29:55 -0400] 6258 MainThread throttling_logger WARNING Failed parsing alternatives line: sqoop-export string index out of range link currently points to /opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/bin/sqoop-export
[02/May/2023 03:29:59 -0400] 6258 MainThread parcel_cache INFO Deleting unmanaged parcel CDH-7.1.8-1.cdh7.1.8.p0.30990532
[02/May/2023 03:30:40 -0400] 6258 MainThread parcel_cache INFO Deleting unmanaged parcel SPARK3-3.3.0.3.3.7180.0-274-1.p0.31212967
[02/May/2023 03:30:40 -0400] 6258 MainThread parcel INFO prepare_environment begin: {}, [], []
[02/May/2023 03:30:40 -0400] 6258 MainThread parcel INFO No parcels activated for use
[02/May/2023 03:31:30 -0400] 6258 __run_queue parcel INFO prepare_environment begin: {u'SPARK3': u'3.3.0.3.3.7180.0-274-1.p0.31212967', u'CFM': u'2.1.5.0-215', u'CDH': u'7.1.8-1.cdh7.1.8.p0.30990532'}, [], []
[02/May/2023 03:31:30 -0400] 6258 __run_queue parcel INFO Service does not request any parcels
[... the same prepare_environment / "Service does not request any parcels" pair repeats several times each at 03:41:21, 03:44:42, 03:56:51 and 04:44:58 ...]
[02/May/2023 04:57:36 -0400] 6258 __run_queue parcel INFO prepare_environment begin: {}, [], []
[02/May/2023 04:57:36 -0400] 6258 __run_queue parcel INFO No parcels activated for use
[02/May/2023 05:05:20 -0400] 6258 __run_queue parcel INFO prepare_environment begin: {u'SPARK3': u'3.3.0.3.3.7180.0-274-1.p0.31212967', u'CFM': u'2.1.5.0-215', u'CDH': u'7.1.8-1.cdh7.1.8.p0.30990532'}, [], []
[02/May/2023 05:05:20 -0400] 6258 __run_queue parcel INFO Service does not request any parcels
[... the pair repeats again at 05:05:21 and 05:13:11 ...]
[02/May/2023 05:30:15 -0400] 28744 MainThread agent INFO To override these variables, use /etc/cloudera-scm-agent/config.ini. Environment variables for CDH locations are not used when CDH is installed from parcels.
[02/May/2023 05:30:19 -0400] 28744 MainThread agent INFO Previously active parcels: {}
[02/May/2023 05:30:19 -0400] 28744 MainThread agent INFO Using parcels directory from server provided value: /opt/cloudera/parcels
[02/May/2023 05:30:19 -0400] 28744 MainThread parcel INFO Agent does create users/groups
[02/May/2023 05:30:19 -0400] 28744 MainThread parcel INFO Agent does parcel permissions
[02/May/2023 05:30:19 -0400] 28744 MainThread downloader INFO Downloader path: /opt/cloudera/parcel-cache
[02/May/2023 05:30:19 -0400] 28744 MainThread parcel_cache INFO Using /opt/cloudera/parcel-cache for parcel cache
[02/May/2023 05:30:19 -0400] 28744 MainThread throttling_logger WARNING Failed parsing alternatives line: sqoop-export string index out of range link currently points to /opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/bin/sqoop-export
[02/May/2023 05:31:59 -0400] 29223 MainThread agent INFO To override these variables, use /etc/cloudera-scm-agent/config.ini. Environment variables for CDH locations are not used when CDH is installed from parcels.
[02/May/2023 05:32:03 -0400] 29223 MainThread agent INFO Previously active parcels: {}
[02/May/2023 05:32:03 -0400] 29223 MainThread agent INFO Using parcels directory from server provided value: /opt/cloudera/parcels

Please let me know if I need to provide any further information. Thank you.
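For reference, the parcel stage can also be queried outside the stuck UI page via the CM REST API. A minimal sketch, with placeholder host, credentials, API version and cluster name:

import requests

# Minimal sketch: list each parcel's stage via the CM REST API.
# CM host, credentials, API version and cluster name are placeholders.
CM_URL = "https://cm-host.my.cloudera.lab:7183/api/v45"
AUTH = ("admin", "changeme")

resp = requests.get(f"{CM_URL}/clusters/MyCluster/parcels", auth=AUTH, verify=False)
resp.raise_for_status()
for p in resp.json()["items"]:
    # stage should progress DOWNLOADED -> DISTRIBUTED -> ACTIVATING -> ACTIVATED
    print(p["product"], p["version"], p["stage"])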
04-26-2023 01:39 AM
Hi all, I am configuring Hue and Impala to authenticate using LDAP. LDAP users can successfully log in to the Hue UI and can access Impala with impala-shell through the Impala load balancer. When logged in as an LDAP user, the user can run a basic Hive query like "show databases". However, when I switch to an Impala session in Hue, it shows errors and the LDAP user fails to run Impala queries. Here is my Impala configuration related to LDAP (screenshot), and here is the Hue configuration, in particular the Impala load balancer setting (screenshot). By the way, my CDP cluster has Kerberos authentication enabled. Please help me out with this issue, and feel free to tell me if I need to provide more information. Thank you in advance.
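To narrow down whether the failure is in Hue or in Impala's LDAP configuration, a direct connection through the load balancer can also be tested from Python with impyla. A minimal sketch with placeholder host and credentials; note that Hue talks to Impala over the HiveServer2 protocol port (typically 21050), not the 21000 port impala-shell uses by default:

from impala.dbapi import connect

# Minimal sketch: LDAP-authenticated connection through the Impala load
# balancer, bypassing Hue. Host and credentials are placeholders.
conn = connect(
    host="impala-lb.my.cloudera.lab",  # load balancer (placeholder)
    port=21050,                        # HS2 port used by Hue, not 21000
    auth_mechanism="LDAP",
    user="ldapuser",
    password="ldap-password",
)
cur = conn.cursor()
cur.execute("SHOW DATABASES")
print(cur.fetchall())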
04-03-2023 01:45 AM
Hi all, I am practicing Kafka on my CDP 7.1.8 cluster with Kerberos enabled. I can create topics under Kerberos authentication. However, when I test producing and consuming messages, the consumer side never receives a message. Here is the console output for each side.

Consumer:

kafka-console-consumer --bootstrap-server host2.my.cloudera.lab:9092 --topic topic001 --from-beginning --consumer.config /root/kafka/krb-client.properties
23/04/03 04:37:00 INFO utils.Log4jControllerRegistration$: [main]: Registered kafka:type=kafka.Log4jController MBean
23/04/03 04:37:01 INFO consumer.ConsumerConfig: [main]: ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [host2.my.cloudera.lab:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = console-consumer
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = console-consumer-82044
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = SASL_PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 45000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
23/04/03 04:37:01 INFO authenticator.AbstractLogin: [main]: Successfully logged in.
23/04/03 04:37:01 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT refresh thread started.
23/04/03 04:37:01 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT valid starting at: 2023-04-03T02:52:45.000-0400
23/04/03 04:37:01 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT expires: 2023-04-04T02:52:45.000-0400
23/04/03 04:37:01 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT refresh sleeping until: 2023-04-03T22:11:03.897-0400
23/04/03 04:37:01 INFO utils.AppInfoParser: [main]: Kafka version: 3.1.1.7.1.8.0-801
23/04/03 04:37:01 INFO utils.AppInfoParser: [main]: Kafka commitId: 15839ba4eb998a33
23/04/03 04:37:01 INFO utils.AppInfoParser: [main]: Kafka startTimeMs: 1680511021242
23/04/03 04:37:01 INFO consumer.KafkaConsumer: [main]: [Consumer clientId=console-consumer, groupId=console-consumer-82044] Subscribed to topic(s): topic001
23/04/03 04:37:01 INFO clients.Metadata: [main]: [Consumer clientId=console-consumer, groupId=console-consumer-82044] Resetting the last seen epoch of partition topic001-0 to 0 since the associated topicId changed from null to MyVuTpA9Tfayosq_QihlwA
23/04/03 04:37:01 INFO clients.Metadata: [main]: [Consumer clientId=console-consumer, groupId=console-consumer-82044] Cluster ID: 7vkx3ceERrKii_vcW_gViQ

Producer:

[root@host1 ~]# kafka-console-producer --broker-list host1.my.cloudera.lab:9092 host2.my.cloudera.lab:9092 --topic topic001 --producer.config /root/kafka/krb-client.properties
23/04/03 04:37:44 INFO utils.Log4jControllerRegistration$: [main]: Registered kafka:type=kafka.Log4jController MBean
23/04/03 04:37:44 INFO producer.ProducerConfig: [main]: ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [host1.my.cloudera.lab:9092]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = console-producer
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 1000
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 1500
retries = 3
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = SASL_PLAINTEXT
security.providers = null
send.buffer.bytes = 102400
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
23/04/03 04:37:44 INFO producer.KafkaProducer: [main]: [Producer clientId=console-producer] Instantiated an idempotent producer.
23/04/03 04:37:44 INFO authenticator.AbstractLogin: [main]: Successfully logged in.
23/04/03 04:37:44 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT refresh thread started.
23/04/03 04:37:44 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT valid starting at: 2023-04-03T02:52:45.000-0400
23/04/03 04:37:44 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT expires: 2023-04-04T02:52:45.000-0400
23/04/03 04:37:44 INFO kerberos.KerberosLogin: [kafka-kerberos-refresh-thread-null]: [Principal=null]: TGT refresh sleeping until: 2023-04-03T23:06:05.063-0400
23/04/03 04:37:44 INFO utils.AppInfoParser: [main]: Kafka version: 3.1.1.7.1.8.0-801
23/04/03 04:37:44 INFO utils.AppInfoParser: [main]: Kafka commitId: 15839ba4eb998a33
23/04/03 04:37:44 INFO utils.AppInfoParser: [main]: Kafka startTimeMs: 1680511064283
>23/04/03 04:37:44 INFO clients.Metadata: [kafka-producer-network-thread | console-producer]: [Producer clientId=console-producer] Cluster ID: 7vkx3ceERrKii_vcW_gViQ
23/04/03 04:37:44 INFO internals.TransactionManager: [kafka-producer-network-thread | console-producer]: [Producer clientId=console-producer] ProducerId set to 5 with epoch 0
23/04/03 04:37:48 INFO clients.Metadata: [kafka-producer-network-thread | console-producer]: [Producer clientId=console-producer] Resetting the last seen epoch of partition topic001-0 to 0 since the associated topicId changed from null to MyVuTpA9Tfayosq_QihlwA
>
>hello
>world

Please help me out with this issue, and feel free to tell me if I need to provide more information. Thank you.
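In case it helps, the same produce/consume round trip can be scripted with kafka-python to take the console tools out of the picture. A minimal sketch, assuming kafka-python and the gssapi package are installed and a valid Kerberos ticket is in the cache, mirroring the settings in krb-client.properties:

from kafka import KafkaConsumer, KafkaProducer

# Shared SASL/GSSAPI settings, matching krb-client.properties.
common = dict(
    bootstrap_servers="host2.my.cloudera.lab:9092",
    security_protocol="SASL_PLAINTEXT",
    sasl_mechanism="GSSAPI",
    sasl_kerberos_service_name="kafka",
)

# Produce one record, then read the topic back from the beginning.
producer = KafkaProducer(**common)
producer.send("topic001", b"hello from python")
producer.flush()

consumer = KafkaConsumer(
    "topic001",
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,  # stop iterating after 10s of silence
    **common,
)
for record in consumer:
    print(record.offset, record.value)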
04-02-2023 08:11 PM
@ChethanYM Could you please explain further why Spark can read a Hive managed table by passing this parameter? Thank you very much.
04-02-2023 08:10 PM
Hi all, I have the HDFS service running on my CDP 7.1.8 Private Cloud Base cluster with Kerberos enabled. Recently, I hit two issues with my HDFS NameNode; here are the screen captures. The first one: (screenshot) The second one: (screenshot) When looking into the role log, it shows: (screenshot) Could anyone point out the root cause and the solution for this issue? Thanks in advance, and please let me know if I need to provide more information.
04-02-2023 07:40 PM
@ChethanYM Thank you for your reply. I tried your suggestion by recreating the Spark session:

>>> from pyspark.sql import SparkSession  # already available in the pyspark shell
>>> conf = spark.sparkContext._conf.setAll([('spark.sql.htl.check', 'false'), ('mapreduce.input.fileinputformat.input.dir.recursive', 'true')])
>>> spark.sparkContext.stop()
>>> spark = SparkSession.builder.config(conf=conf).getOrCreate()

It works fine. Thank you very much.
03-30-2023 10:50 PM
Hi all, I am practicing Spark. When using pyspark to query tables in Hive, I can retrieve data from an external table but cannot query an internal (managed) table. Here is the error message:

>>> spark.read.table("exams").count()
23/03/30 22:28:50 WARN conf.HiveConf: HiveConf of name hive.masking.algo does not exist
Hive Session ID = eb0a9583-da34-4c85-9a1b-db790d126fb1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/sql/readwriter.py", line 301, in table
return self._df(self._jreader.table(tableName))
File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py",
File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/sql/utils.py", line 69, in deco
raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u'\nSpark has no access to table `default`.`exams`. Clients can access this table only if they have the following capabilities: MANAGEDINSERTWRITE,HIVEMANAGESTATS,HIVECACHEINVALIDATE,CONNECTORWRITE.\nThis table may be a Hive-managed ACID table, or require some other capability that Spark\ncurrently does not implement;'

I know that Spark cannot read an ACID Hive table directly. Is there any workaround? Thanks in advance.
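One workaround I am looking at is the Hive Warehouse Connector (HWC), which CDP ships for exactly this case. A minimal sketch (untested here), assuming the HWC jar and the pyspark_llap zip are supplied to the pyspark session and HWC is configured per the CDP docs:

from pyspark_llap import HiveWarehouseSession

# Build an HWC session on top of the existing SparkSession; reads of
# Hive-managed ACID tables then go through HiveServer2 instead of the
# direct file access that raised the AnalysisException above.
hive = HiveWarehouseSession.session(spark).build()
df = hive.executeQuery("SELECT * FROM default.exams")
print(df.count())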
Labels:
- Apache Hive
- Apache Spark
03-30-2023 10:41 PM
1 Kudo
I managed to fix the issue. As I was using the CDSW template, the library it installs by default is "sklearn". The correct library name should be "scikit-learn".
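For anyone hitting the same thing, the pip distribution name and the Python import name differ, which is the trap here. A quick check:

# "scikit-learn" is the distribution to install (pip install scikit-learn);
# the "sklearn" name on PyPI is only a deprecated placeholder package.
import sklearn  # the import name, by contrast, is "sklearn"
print(sklearn.__version__)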
03-30-2023 07:25 PM
1 Kudo
Thank you @RangaReddy, I managed to solve the problem using your advice. Thank you very much.