Member since: 02-17-2019
Posts: 30
Kudos Received: 1
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 1760 | 05-19-2020 08:47 AM
05-31-2022
07:35 AM
Hi Chethan, thank you for the reply. The parcel version was selected automatically after we clicked 'Download' in Cloudera Manager, so I assume it is the correct one:

2022-05-24 11:04:57,774 INFO New I/O worker #340:com.cloudera.parcel.components.ParcelDownloaderImpl: Completed download of: https://archive.cloudera.com/kafka/parcels/latest/KAFKA-4.1.0-1.4.1.0.p0.4-el7.parcel

However, this page shows that for CDH 6.3.4 the Apache Kafka version is 2.2.1: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_63_packaging.html
I am not sure how the parcel version and the Apache version relate to each other.

We have the complete log file, but I hesitate to post it here in an open forum. Is there a way to send it to you privately? Thanks again!
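As an aside, one way to double-check exactly which parcels Cloudera Manager has downloaded, and what state each one is in, is the CM REST API's parcels endpoint; the mapping from a parcel version to an upstream Apache Kafka version still has to come from the packaging documentation linked above. A minimal sketch, where the host, port, credentials, API version, and cluster name (taken from the activation log later in this thread history) are all assumptions to adjust for your environment:

```python
import requests

# Sketch: list parcels known to Cloudera Manager and their current stage.
# Host, credentials, API version, and cluster name are assumptions.
CM_URL = "http://cm-host:7180/api/v19"
AUTH = ("admin", "admin")

resp = requests.get(f"{CM_URL}/clusters/Peel/parcels", auth=AUTH)
resp.raise_for_status()

for parcel in resp.json().get("items", []):
    # e.g. KAFKA 4.1.0-1.4.1.0.p0.4 DOWNLOADED, CDH 6.3.4-... ACTIVATED
    print(parcel["product"], parcel["version"], parcel["stage"])
```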
05-24-2022
09:11 AM
Hi Experts: Our Big Data cluster is running CDH 6.3.4. Today we tried to install and activate Kafka. The download of "KAFKA-4.1.0-1.4.1.0.p0.4-el7.parcel" seemed to work fine, but activating the parcel failed. In the file "cloudera-scm-server.log" I see this error:

2022-05-24 11:05:56,777 INFO scm-web-268076:com.cloudera.enterprise.JavaMelodyFacade: Exiting HTTP Operation: Method:POST, Path:/parcel/details, Status:200
2022-05-24 11:05:58,414 INFO scm-web-267966:com.cloudera.enterprise.JavaMelodyFacade: Entering HTTP Operation: Method:POST, Path:/clusters/1/activateParcel
2022-05-24 11:05:58,418 INFO scm-web-267966:com.cloudera.parcel.components.ParcelManagerImpl: Activating parcel KAFKA:4.1.0-1.4.1.0.p0.4 on cluster Peel
2022-05-24 11:05:58,429 WARN scm-web-267966:com.cloudera.server.web.cmf.WebController: Failed to activate parcel KAFKA:4.1.0-1.4.1.0.p0.4 for cluster DbCluster{id=1, name=Peel}
2022-05-24 11:05:58,429 ERROR scm-web-267966:com.cloudera.server.web.common.JsonResponse: JsonResponse created with throwable: com.cloudera.parcel.ParcelRelationsException: CDH 6.3.4-1.cdh6.3.4.p0.6626826 replaces KAFKA 4.1.0-1.4.1.0.p0.4.
    at com.cloudera.parcel.components.ParcelDependencyManagerImpl.validateDependencies(ParcelDependencyManagerImpl.java:418)
    at com.cloudera.parcel.components.ParcelDependencyManagerImpl.validateDependenciesForActivation(ParcelDependencyManagerImpl.java:375)
    at com.cloudera.parcel.components.ParcelManagerImpl.activateParcel(ParcelManagerImpl.java:295)
    at com.cloudera.parcel.components.ParcelManagerImpl.activateParcel(ParcelManagerImpl.java:246)
    at com.cloudera.server.web.cmf.parcel.ParcelActivationController.activateParcel(ParcelActivationController.java:61)
    at com.cloudera.server.web.cmf.parcel.ParcelActivationController$$FastClassBySpringCGLIB$$9b87c4cb.invoke(<generated>)
    at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
    at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:736)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
    at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:84)
    at com.cloudera.server.web.cmf.aop.RetryExecution$1.call(RetryExecution.java:32)
    at com.cloudera.server.common.RetryWrapper.executeWithRetry(RetryWrapper.java:32)
    at com.cloudera.server.common.RetryUtils.executeWithRetryHelper(RetryUtils.java:210)
    at com.cloudera.server.common.RetryUtils.executeWithRetry(RetryUtils.java:131)
    at com.cloudera.server.web.cmf.aop.RetryExecution.retryOperation(RetryExecution.java:24)
    at sun.reflect.GeneratedMethodAccessor1683.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)

Could you please give a hint on where to check for the problem? Thank you!
Labels:
Apache Kafka
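For reference, the ParcelRelationsException above ("CDH ... replaces KAFKA ...") is a parcel-relations check: the error text itself says the CDH 6.3.4 parcel supersedes the standalone KAFKA parcel, which is why Cloudera Manager refuses to activate both at once. A hedged sketch of confirming this from the parcel metadata on a Cloudera Manager host; the path below and the presence of "replaces"/"depends" fields in parcel.json are assumptions about the parcel layout, not something quoted from the post:

```python
import json
from pathlib import Path

# Sketch: print which products the active CDH parcel declares it replaces.
# The parcel path and the exact field names are assumptions.
meta = Path("/opt/cloudera/parcels/CDH/meta/parcel.json")
parcel = json.loads(meta.read_text())

print("parcel:  ", parcel.get("name"), parcel.get("version"))
print("replaces:", parcel.get("replaces", "<not declared>"))
print("depends: ", parcel.get("depends", "<not declared>"))
```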
09-27-2021
11:16 AM
Hi Chelia, the fix does make sense. Thank you. While we are here, may I ask another question: when we want to upgrade our Cloudera CDH 6.3.4 cluster, which newer version could we upgrade to, ideally one that also runs on RHEL 8.x? Thanks again! Regards, Vincent
09-23-2021
12:48 PM
Hi experts: We are running Cloudera CDH 6.3.4. When users log out of the login nodes, we see the following error in /var/log/messages from time to time:

Sep 23 15:06:37 hlog-2 cm: Process Process-10786:
Sep 23 15:06:37 hlog-2 cm: Traceback (most recent call last):
Sep 23 15:06:37 hlog-2 cm: File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
Sep 23 15:06:37 hlog-2 cm: self.run()
Sep 23 15:06:37 hlog-2 cm: File "/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
Sep 23 15:06:37 hlog-2 cm: self._target(*self._args, **self._kwargs)
Sep 23 15:06:37 hlog-2 cm: File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/monitor/host/filesystem_map.py", line 27, in disk_usage_wrapper
Sep 23 15:06:37 hlog-2 cm: usage = psutil.disk_usage(p)
Sep 23 15:06:37 hlog-2 cm: File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/psutil/__init__.py", line 1947, in disk_usage
Sep 23 15:06:37 hlog-2 cm: return _psplatform.disk_usage(path)
Sep 23 15:06:37 hlog-2 cm: File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/psutil/_psposix.py", line 131, in disk_usage
Sep 23 15:06:37 hlog-2 cm: st = os.statvfs(path)
Sep 23 15:06:37 hlog-2 cm: OSError: [Errno 2] No such file or directory: '/run/user/3336512'

The error does not seem to cause any harm, but it is annoying. We are on Red Hat Enterprise Linux 7.8. Any suggestions on how to fix this are very welcome! Thanks, Vincent.
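For context, the traceback above comes from the Cloudera Manager agent calling psutil.disk_usage() on a per-user tmpfs (/run/user/<uid>) that systemd removes when the user logs out, so the mount can vanish between enumeration and the statvfs() call. A minimal sketch of that failure mode and of the kind of guard that tolerates it; this is illustrative only, not the agent's actual code:

```python
import errno
import psutil

def safe_disk_usage(path):
    """Return psutil.disk_usage(path), or None if the mount point disappeared
    (e.g. a systemd /run/user/<uid> tmpfs removed at logout) before statvfs()
    could run."""
    try:
        return psutil.disk_usage(path)
    except OSError as exc:
        if exc.errno == errno.ENOENT:
            return None  # mount vanished; skip it instead of raising
        raise

# Illustrative usage: enumerate current mounts and skip transient ones.
for part in psutil.disk_partitions(all=True):
    usage = safe_disk_usage(part.mountpoint)
    if usage is not None:
        print(part.mountpoint, usage.percent)
```

In practice the equivalent on a managed cluster is a configuration change rather than code: restricting which filesystems the agent's host monitoring watches so the transient /run/user mounts are ignored, which is presumably the kind of modification the accepted reply refers to.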
01-21-2021
11:53 AM
Hi experts:
The Hadoop version that ships with CDH 6.3.4 is Hadoop 3.0.0-cdh6.3.4. The Apache Spark web site does not provide a prebuilt tarball for Hadoop 3.0.0, so I downloaded "spark-3.0.1-bin-hadoop3.2.tgz", untarred it, and tried it on our CDH 6.3.4 cluster.
A simple Spark line-counting job works fine, and in a pyspark session 'show tables' against a Hive database also works, but creating a table fails with this error:
pyspark.sql.utils.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch table messages1. Invalid method name: 'get_table_req';
That is very similar to what is described here:
https://stackoverflow.com/questions/63476121/hive-queries-failing-with-unable-to-fetch-table-test-table-invalid-method-name
I tried replacing the Hive-related jars under the Spark 3.0.1 jars subdirectory with the corresponding ones from /opt/cloudera/parcels/CDH-6.3.4-1.cdh6.3.4.p0.6626826/jars, but that did not help - it failed with a different error.
Does anyone have experience running Spark 3 on a CDH 6.3.x cluster? Can you suggest anything to try?
Your help is greatly appreciated!
Regards.
Vincent
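For what it's worth, the "Invalid method name: 'get_table_req'" error typically means Spark 3's bundled Hive 2.3 metastore client is calling an RPC that the older Hive metastore shipped with CDH 6.3 (Hive 2.1.1) does not implement. A hedged sketch of pointing Spark at an older metastore client via configuration instead of swapping jars by hand; the jar path is taken from the post above, but the exact classpath format (directory glob vs. a colon-separated jar list) and whether every required jar lives there are assumptions, and the cluster's hive-site.xml is assumed to be on the driver's config path:

```python
from pyspark.sql import SparkSession

# Sketch: make Spark 3 talk to a Hive 2.1.1 metastore instead of using its
# bundled Hive 2.3 client. Version string and jar path are assumptions.
spark = (
    SparkSession.builder
    .appName("spark3-on-cdh6-metastore-test")
    .enableHiveSupport()
    # Match the metastore version that CDH 6.3 ships (Hive 2.1.1).
    .config("spark.sql.hive.metastore.version", "2.1.1")
    # Load the metastore client jars from the CDH parcel, not from Spark.
    .config(
        "spark.sql.hive.metastore.jars",
        "/opt/cloudera/parcels/CDH-6.3.4-1.cdh6.3.4.p0.6626826/jars/*",
    )
    .getOrCreate()
)

spark.sql("SHOW TABLES").show()
spark.sql("CREATE TABLE IF NOT EXISTS messages1_test (id INT, msg STRING)")
```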
09-03-2020
07:24 AM
Hi experts:
There is one node on which the DataNode process is frequently restarted by supervisord. Other nodes in the cluster with the same hardware and configuration do not show this issue. We are on CDH 5.15.2-1. Could you please advise where to look for the cause? Thank you.
In the log file 'hadoop-cmf-hdfs-DATANODE-compute-1-14.local.log.out', we see the following entries for today:
bash-4.1# grep -B 2 "STARTUP_MSG: Starting DataNode" hadoop-cmf-hdfs-DATANODE-compute-1-14.local.log.out
2020-09-03 02:49:08,033 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
--
2020-09-03 03:48:31,912 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
--
2020-09-03 05:25:37,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
--
2020-09-03 08:26:25,445 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
--
2020-09-03 08:42:48,882 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
These correspond to the following entries in the supervisord log /var/log/cloudera-scm-agent/supervisord.log:
2020-09-03 02:49:06,297 INFO exited: 64450-hdfs-DATANODE (terminated by SIGKILL; not expected)
2020-09-03 02:49:07,300 INFO spawned: '64450-hdfs-DATANODE' with pid 94527
2020-09-03 02:49:07,300 INFO Increased RLIMIT_MEMLOCK limit to 4294967296
2020-09-03 02:49:27,361 INFO success: 64450-hdfs-DATANODE entered RUNNING state, process has stayed up for > than 20 seconds (startsecs)

2020-09-03 03:48:31,094 INFO exited: 64450-hdfs-DATANODE (terminated by SIGKILL; not expected)
2020-09-03 03:48:31,166 INFO spawned: '64450-hdfs-DATANODE' with pid 107591
2020-09-03 03:48:31,166 INFO Increased RLIMIT_MEMLOCK limit to 4294967296
2020-09-03 03:48:51,368 INFO success: 64450-hdfs-DATANODE entered RUNNING state, process has stayed up for > than 20 seconds (startsecs)

2020-09-03 05:25:36,275 INFO exited: 64450-hdfs-DATANODE (terminated by SIGKILL; not expected)
2020-09-03 05:25:37,277 INFO spawned: '64450-hdfs-DATANODE' with pid 127966
2020-09-03 05:25:37,278 INFO Increased RLIMIT_MEMLOCK limit to 4294967296
2020-09-03 05:25:57,338 INFO success: 64450-hdfs-DATANODE entered RUNNING state, process has stayed up for > than 20 seconds (startsecs)

2020-09-03 08:26:23,687 INFO exited: 64450-hdfs-DATANODE (terminated by SIGKILL; not expected)
2020-09-03 08:26:24,690 INFO spawned: '64450-hdfs-DATANODE' with pid 18960
2020-09-03 08:26:24,690 INFO Increased RLIMIT_MEMLOCK limit to 4294967296
2020-09-03 08:26:44,752 INFO success: 64450-hdfs-DATANODE entered RUNNING state, process has stayed up for > than 20 seconds (startsecs)

2020-09-03 08:42:47,139 INFO exited: 64450-hdfs-DATANODE (terminated by SIGKILL; not expected)
2020-09-03 08:42:48,142 INFO spawned: '64450-hdfs-DATANODE' with pid 22506
2020-09-03 08:42:48,142 INFO Increased RLIMIT_MEMLOCK limit to 4294967296
2020-09-03 08:43:08,205 INFO success: 64450-hdfs-DATANODE entered RUNNING state, process has stayed up for > than 20 seconds (startsecs)
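One note on the log excerpts above: "terminated by SIGKILL; not expected" means something outside supervisord killed the DataNode JVM, and on an otherwise healthy node the most common culprit is the kernel OOM killer. A minimal sketch of checking for that and correlating it with the restart times; this is generic RHEL 7 troubleshooting, not a Cloudera-prescribed procedure, and reading the kernel ring buffer may require root:

```python
import re
import subprocess

# Sketch: scan the kernel ring buffer for OOM-killer activity and compare the
# timestamps with the DataNode restart times in the supervisord log above.
dmesg = subprocess.run(["dmesg", "-T"], capture_output=True, text=True).stdout

pattern = re.compile(r"Out of memory|oom-killer|Killed process", re.IGNORECASE)
for line in dmesg.splitlines():
    if pattern.search(line):
        print(line)

# If nothing matches, the SIGKILL came from user space (a script, systemd, an
# operator), and auditing kill(2) calls with auditd would be the next step.
```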
06-03-2020
07:08 AM
Hi paras, you are very helpful. The error is gone now after making the configuration change you suggested. Thank you!
06-02-2020
07:38 AM
Hi Experts:
Following the instructions at https://docs.cloudera.com/documentation/enterprise/6/latest/topics/install_cm_cdh.html, we set up a test CDH 6.3.3 cluster and enabled Kerberos for security. The Hive, Impala, and HBase command-line clients all connect and work at a basic level. Most of Hue works too, except that the HBase Browser throws the error "Api Error: TSocket read 0 bytes". Excerpts from the Hue access.log and the HBase Thrift Server log are below. What else should we check to resolve this issue? Thank you!
/var/log/hue/access.log
---------------------------
[02/Jun/2020 07:16:11 -0700] INFO 173.2.217.185 xe46 - "POST /hbase/api/getClusters HTTP/1.1" returned in 11ms
[02/Jun/2020 07:16:11 -0700] INFO 173.2.217.185 xe46 - "POST /notebook/api/autocomplete/default HTTP/1.1" returned in 152ms
[02/Jun/2020 07:16:12 -0700] INFO 173.2.217.185 xe46 - "POST /hbase/api/getTableList/HBase HTTP/1.1" returned in 96ms
[02/Jun/2020 07:16:12 -0700] ERROR 173.2.217.185 xe46 - "POST /desktop/log_js_error HTTP/1.1"-- JS ERROR: {"msg":"Uncaught SyntaxError: Unexpected token ':'","url":"https://35.226.68.232:8888/hue/hbase/#HBase","line":2,"column":10, "stack":"SyntaxError: Unexpected token ':'\n at w (https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:37:676)\n at Function.globalEval (https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:37:2584)\n at text script (https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:48:76954)\n at https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:48:73527\n at C (https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:48:73644)\n at XMLHttpRequest.<anonymous> (https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:48:76224)"}
[02/Jun/2020 07:16:12 -0700] INFO 173.2.217.185 xe46 - "POST /desktop/log_js_error HTTP/1.1" returned in 3ms
/var/log/hbase/hbase-cmf-hbase-HBASETHRIFTSERVER-master-node1.c.nyu-xeep-eosp-xbmo.internal.log.out
----------------------------
2020-06-02 14:16:12,002 INFO org.apache.hadoop.hbase.thrift.ThriftServerRunner: Effective user: hue
2020-06-02 14:16:12,007 ERROR org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Expected protocol id ffffff82 but got ffffff80
    at org.apache.thrift.protocol.TCompactProtocol.readMessageBegin(TCompactProtocol.java:503)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
    at org.apache.hadoop.hbase.thrift.ThriftServerRunner.lambda$setupServer$0(ThriftServerRunner.java:656)
    at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-06-02 14:16:12,019 INFO org.apache.hadoop.hbase.thrift.ThriftServerRunner: Effective user: hue
2020-06-02 14:16:12,020 ERROR org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Expected protocol id ffffff82 but got ffffff80
    at org.apache.thrift.protocol.TCompactProtocol.readMessageBegin(TCompactProtocol.java:503)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
    at org.apache.hadoop.hbase.thrift.ThriftServerRunner.lambda$setupServer$0(ThriftServerRunner.java:656)
    at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-06-02 14:16:12,031 INFO org.apache.hadoop.hbase.thrift.ThriftServerRunner: Effective user: hue
2020-06-02 14:16:12,032 ERROR org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Expected protocol id ffffff82 but got ffffff80
    at org.apache.thrift.protocol.TCompactProtocol.readMessageBegin(TCompactProtocol.java:503)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
    at org.apache.hadoop.hbase.thrift.ThriftServerRunner.lambda$setupServer$0(ThriftServerRunner.java:656)
    at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
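A note on the Thrift Server log above: "Expected protocol id ffffff82 but got ffffff80" is TCompactProtocol (messages start with byte 0x82) rejecting a request sent with TBinaryProtocol (messages start with 0x80), i.e. the HBase Thrift Server and the Hue client disagree on protocol/transport settings. A hedged sketch of dumping the Thrift-related properties the server is actually running with, so they can be compared against what Hue expects; the config path and which of these properties are present are assumptions (on a CM-managed host the effective file lives under the role's process directory rather than /etc/hbase/conf):

```python
import xml.etree.ElementTree as ET

# Sketch: print the Thrift-related settings from the effective hbase-site.xml.
# The path below is an assumption; adjust to the Thrift Server role's config.
CONF = "/etc/hbase/conf/hbase-site.xml"

props = {
    prop.findtext("name"): prop.findtext("value")
    for prop in ET.parse(CONF).getroot().iter("property")
}

for key in (
    "hbase.regionserver.thrift.compact",   # compact protocol (0x82) vs binary (0x80)
    "hbase.regionserver.thrift.framed",    # framed vs buffered transport
    "hbase.thrift.support.proxyuser",      # needed for Hue impersonation (doAs)
    "hbase.regionserver.thrift.http",      # Thrift-over-HTTP mode
):
    print(key, "=", props.get(key, "<unset, default applies>"))
```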
05-19-2020
08:47 AM
Hi paras, resetting the hostnames to long names got me moving forward again. Thank you!