Member since: 02-17-2019 · Posts: 30 · Kudos Received: 1 · Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2753 | 05-19-2020 08:47 AM |
09-27-2021 11:16 AM
Hi Chelia, the fix does make sense, thank you. While we are here, may I ask another question: when we want to upgrade our Cloudera CDH-6.3.4, which newer software version could we upgrade to, ideally one that runs on RHEL 8.x? Thanks again! Regards, Vincent
09-23-2021 12:48 PM
Hi experts: We are on Cloudera CDH-6.3.4, running on Red Hat Enterprise Linux 7.8. When users exit from the login nodes, we see this error in /var/log/messages from time to time:

```
Sep 23 15:06:37 hlog-2 cm: Process Process-10786:
Sep 23 15:06:37 hlog-2 cm: Traceback (most recent call last):
Sep 23 15:06:37 hlog-2 cm: File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
Sep 23 15:06:37 hlog-2 cm: self.run()
Sep 23 15:06:37 hlog-2 cm: File "/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
Sep 23 15:06:37 hlog-2 cm: self._target(*self._args, **self._kwargs)
Sep 23 15:06:37 hlog-2 cm: File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/monitor/host/filesystem_map.py", line 27, in disk_usage_wrapper
Sep 23 15:06:37 hlog-2 cm: usage = psutil.disk_usage(p)
Sep 23 15:06:37 hlog-2 cm: File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/psutil/__init__.py", line 1947, in disk_usage
Sep 23 15:06:37 hlog-2 cm: return _psplatform.disk_usage(path)
Sep 23 15:06:37 hlog-2 cm: File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/psutil/_psposix.py", line 131, in disk_usage
Sep 23 15:06:37 hlog-2 cm: st = os.statvfs(path)
Sep 23 15:06:37 hlog-2 cm: OSError: [Errno 2] No such file or directory: '/run/user/3336512'
```

The error does not seem to cause any harm, but it is annoying. Any suggestions on how to fix this are very welcome! Thanks, Vincent.
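The traceback suggests the Cloudera Manager agent calls `os.statvfs` on a mount point (`/run/user/<uid>`, the systemd user runtime directory) that disappears when the user logs out. As a minimal sketch of the race and one way to guard against it — `safe_disk_usage` is a hypothetical helper for illustration, not part of the agent:

```python
import errno
import os

def safe_disk_usage(path):
    """Return (total, used, free) bytes for a mount point, or None if the
    path vanished between enumeration and stat (e.g. /run/user/<uid>
    removed by systemd at logout). Mirrors what os.statvfs reports."""
    try:
        st = os.statvfs(path)
    except OSError as e:
        if e.errno == errno.ENOENT:
            return None  # transient mount is gone; skip rather than crash
        raise
    total = st.f_blocks * st.f_frsize
    free = st.f_bavail * st.f_frsize
    return (total, total - free, free)

# A path that no longer exists yields None instead of raising OSError:
print(safe_disk_usage("/run/user/does-not-exist-99999"))
```

The same effect can often be had without code changes by excluding `/run/user/.*` from the agent's monitored filesystems, if your agent version exposes such a filter.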
06-03-2020 07:08 AM
Hi paras, you are very helpful. The error is gone now after making the configuration modification you suggested. Thank you!
06-02-2020 07:38 AM
Hi Experts: Following the instructions at https://docs.cloudera.com/documentation/enterprise/6/latest/topics/install_cm_cdh.html, we set up a test CDH 6.3.3 cluster and enabled Kerberos for security. The Hive, Impala, and HBase command-line clients all connect and function at a basic level. Most of Hue works too, except that the HBase Browser throws an error: "Api Error: TSocket read 0 bytes". Please see the Hue access.log and HBase ThriftServer log below. What should we check to resolve the issue? Thank you!
/var/log/hue/access.log:

```
[02/Jun/2020 07:16:11 -0700] INFO 173.2.217.185 xe46 - "POST /hbase/api/getClusters HTTP/1.1" returned in 11ms
[02/Jun/2020 07:16:11 -0700] INFO 173.2.217.185 xe46 - "POST /notebook/api/autocomplete/default HTTP/1.1" returned in 152ms
[02/Jun/2020 07:16:12 -0700] INFO 173.2.217.185 xe46 - "POST /hbase/api/getTableList/HBase HTTP/1.1" returned in 96ms
[02/Jun/2020 07:16:12 -0700] ERROR 173.2.217.185 xe46 - "POST /desktop/log_js_error HTTP/1.1"-- JS ERROR: {"msg":"Uncaught SyntaxError: Unexpected token ':'","url":"https://35.226.68.232:8888/hue/hbase/#HBase","line":2,"column":10, "stack":"SyntaxError: Unexpected token ':'\n at w (https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:37:676)\n at Function.globalEval (https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:37:2584)\n at text script (https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:48:76954)\n at https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:48:73527\n at C (https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:48:73644)\n at XMLHttpRequest.<anonymous> (https://35.226.68.232:8888/static/desktop/js/bundles/hue/vendors~hue~notebook-bundle-ba716af7db7997b47d29.a4ce11024956.js:48:76224)"}
[02/Jun/2020 07:16:12 -0700] INFO 173.2.217.185 xe46 - "POST /desktop/log_js_error HTTP/1.1" returned in 3ms
```
/var/log/hbase/hbase-cmf-hbase-HBASETHRIFTSERVER-master-node1.c.nyu-xeep-eosp-xbmo.internal.log.out:

```
2020-06-02 14:16:12,002 INFO org.apache.hadoop.hbase.thrift.ThriftServerRunner: Effective user: hue
2020-06-02 14:16:12,007 ERROR org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Expected protocol id ffffff82 but got ffffff80
    at org.apache.thrift.protocol.TCompactProtocol.readMessageBegin(TCompactProtocol.java:503)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
    at org.apache.hadoop.hbase.thrift.ThriftServerRunner.lambda$setupServer$0(ThriftServerRunner.java:656)
    at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-06-02 14:16:12,019 INFO org.apache.hadoop.hbase.thrift.ThriftServerRunner: Effective user: hue
2020-06-02 14:16:12,020 ERROR org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Expected protocol id ffffff82 but got ffffff80
    at org.apache.thrift.protocol.TCompactProtocol.readMessageBegin(TCompactProtocol.java:503)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
    at org.apache.hadoop.hbase.thrift.ThriftServerRunner.lambda$setupServer$0(ThriftServerRunner.java:656)
    at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2020-06-02 14:16:12,031 INFO org.apache.hadoop.hbase.thrift.ThriftServerRunner: Effective user: hue
2020-06-02 14:16:12,032 ERROR org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Expected protocol id ffffff82 but got ffffff80
    at org.apache.thrift.protocol.TCompactProtocol.readMessageBegin(TCompactProtocol.java:503)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
    at org.apache.hadoop.hbase.thrift.ThriftServerRunner.lambda$setupServer$0(ThriftServerRunner.java:656)
    at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
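The exception itself gives a strong hint: 0x82 is the protocol id of Thrift's TCompactProtocol, while 0x80 is the high byte of TBinaryProtocol's version word, so the Thrift server and the Hue client disagree on the wire protocol. The thread does not record the exact setting that was changed, but one plausible alignment, assuming stock HBase property names, is to turn off the compact protocol (and framed transport) on the HBase Thrift server so it matches Hue's binary protocol over a buffered transport:

```xml
<!-- hbase-site.xml for the HBase Thrift Server role (or the matching
     Cloudera Manager checkboxes). Property names are assumed from
     stock HBase; verify against your version's documentation. -->
<property>
  <name>hbase.regionserver.thrift.compact</name>
  <value>false</value> <!-- Hue speaks TBinaryProtocol (id 0x80) -->
</property>
<property>
  <name>hbase.regionserver.thrift.framed</name>
  <value>false</value> <!-- Hue uses the buffered transport -->
</property>
```

Restart the Thrift Server role after the change so the new protocol takes effect.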
05-19-2020 08:47 AM
Hi paras, resetting the hostnames to long names got me moving forward again. Thank you!
05-18-2020 05:42 PM
The cluster consists of six VM nodes in GCP: 2 master nodes, 1 login node, and 3 data nodes.
05-18-2020 05:40 PM
Hi, Following the instructions described on this page: https://docs.cloudera.com/documentation/enterprise/6/latest/topics/installation.html I made good progress up to Step 6, Install CDH and Other Software: https://docs.cloudera.com/documentation/enterprise/6/latest/topics/install_software_cm_wizard.html Now I encounter an error. Please see the picture below. I completely wiped out the databases and retried twice, but I still see the error. Could you advise where to check to fix the trouble and move forward? Thank you!
05-23-2019 06:48 PM
Thanks a lot for your reply, Harsh. These sound great. Can you give some pointers to learning materials on both methods, e.g. examples, blogs, URLs, or books?
05-18-2019 05:43 AM
We have ten million image and video files and are looking for efficient ways to store them in Hadoop (HDFS ...) and analyze them with tools available in the Hadoop ecosystem. I understand HDFS prefers big files, and these image files are small, each under ten megabytes. Please advise. Thanks very much!
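The usual answers to the HDFS small-files problem are container formats such as SequenceFiles, HAR archives, or HBase, which pack many small objects into a few large HDFS-friendly files plus an index for random access. As a toy sketch of that packing idea in plain Python (`pack` and `read_one` are illustrative helpers, not Hadoop APIs):

```python
import os
import tempfile

def pack(container_path, files):
    """Append each (name, bytes) pair into one large container file and
    return an index of name -> (offset, length). This mimics the idea
    behind SequenceFiles / HAR: many small files become one big file
    plus a lookup index, instead of millions of NameNode entries."""
    index = {}
    with open(container_path, "wb") as out:
        for name, data in files:
            index[name] = (out.tell(), len(data))
            out.write(data)
    return index

def read_one(container_path, index, name):
    """Random-access read of a single packed file via the index."""
    offset, length = index[name]
    with open(container_path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Demo: pack two tiny "images" and read one back.
container = os.path.join(tempfile.mkdtemp(), "images.pack")
idx = pack(container, [("a.jpg", b"\xff\xd8aaa"), ("b.jpg", b"\xff\xd8bbbb")])
assert read_one(container, idx, "b.jpg") == b"\xff\xd8bbbb"
```

In real Hadoop you would keep the index in the container itself (as SequenceFile keys do) and size each container near the HDFS block size.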
Labels:
- HDFS
04-29-2019 07:56 AM
It's very nice to know about the upcoming updates. So with the feature in place, we would be able to assign some nodes exclusively to certain users (e.g. user userA, application appX) for their processing requirements. The natural follow-up question is about data storage locations: can other users' applications still read and write to the assigned nodes? Will appX output files be written to the assigned nodes only? Will appX be allowed to read input file blocks from all nodes in the cluster? That's a lot of questions, I know. Thanks a lot!