Member since: 09-29-2015
5226 Posts
22 Kudos Received
34 Solutions
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1392 | 07-13-2022 07:05 AM |
|  | 3584 | 08-11-2021 05:29 AM |
|  | 2327 | 07-07-2021 01:04 AM |
|  | 1574 | 07-06-2021 03:24 AM |
|  | 3545 | 06-07-2021 11:12 PM |
08-05-2020 05:44 AM
Hi @Bender, thanks a lot for your answers. I will open a case with Cloudera Support for it. Kind regards, Daniel
08-03-2020 07:51 AM
Hello @MeenaK, thank you for reaching out. This community thread mentions that "The ConvertJsonToAvro processor was removed from the default NiFi distribution bundle because of space limitations as of the Apache NiFi 1.10 release." The thread points to a repository of Apache NiFi. Hope it helps! Kind regards: Ferenc
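If you still need the processor on NiFi 1.10+, here is a minimal sketch of one way to restore it. It assumes the nifi-kite-nar artifact (which bundled ConvertJSONToAvro in earlier releases) is published on Maven Central for a version compatible with your flow; NIFI_HOME and NAR_VERSION below are placeholders:

# Hedged sketch: drop the Kite NAR (containing ConvertJSONToAvro) into NiFi's lib directory.
NIFI_HOME=/opt/nifi        # placeholder: adjust to your installation
NAR_VERSION=1.9.2          # placeholder: pick a version that actually exists for your setup
wget "https://repo1.maven.org/maven2/org/apache/nifi/nifi-kite-nar/${NAR_VERSION}/nifi-kite-nar-${NAR_VERSION}.nar" -P "${NIFI_HOME}/lib"
# Restart NiFi afterwards so the new NAR is picked up.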
07-31-2020 01:43 AM
Hello @agsumeet, @direcision, @Dorown and @abinanths, thank you for reaching out! I have checked internally, and the issue described in this thread looks like HIVE-16683 / SPARK-26932. As of now, there is no fix backported to any CDH release (at the time of writing, that means CDH 6.3.3 and earlier). Should you have a Cloudera Support subscription, please file a support case with us for further assistance! Thank you: Ferenc
07-30-2020 01:36 AM
Hello @AdityaShaw, sorry that I missed addressing your enquiry about the server 500 error. Based on the KB article you referenced, it is related to the bug. Do you still see the server 500 error once hive.server2.thrift.http.cookie.auth.enabled is set to false? Should you experience disconnects even after applying that workaround, you might have more concurrent HTTPClient connections than Knox can handle at a time. Please only change the parameters listed below once you have verified that the above workaround was not sufficient. Based on my research, the below parameters are still safe to apply; even so, before applying any changes in production, please test the new values in a non-prod environment, as we are not familiar with your use cases and how your cluster/workload is designed:

gateway.httpclient.connectionTimeout=600000 (10 min)
gateway.httpclient.socketTimeout=600000 (10 min)
gateway.metrics.enabled=false
gateway.jmx.metrics.reporting.enabled=false
gateway.httpclient.maxConnections=128

Kind regards: Ferenc
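To illustrate, a minimal sketch for inspecting the current values before changing anything; the gateway-site.xml location is an assumption and may differ in your deployment:

# Hedged sketch: print each Knox property above plus the line following it (usually the value element).
for p in gateway.httpclient.connectionTimeout \
         gateway.httpclient.socketTimeout \
         gateway.metrics.enabled \
         gateway.jmx.metrics.reporting.enabled \
         gateway.httpclient.maxConnections; do
  grep -A1 "$p" /etc/knox/conf/gateway-site.xml   # path is an assumption
done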
07-28-2020 06:36 AM
Hello @Jua, thank you for reporting the issue of CM not being able to connect to its embedded database after manually installing the postgres client RPM. I understand that you have already tried to start the SCM server DB. Do you still face the issue, or has it been resolved since? Please verify:
- whether the postgres database is running and can be accessed;
- if the db is running, whether killing the db process and restarting the cloudera-scm-server-db.service fixes the issue.
Thank you: Ferenc
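For reference, a minimal sketch of these checks on a systemd-based host; the cloudera-scm-server-db service name is taken from your report:

ps -ef | grep postgres                         # is a postgres process actually running?
sudo systemctl status cloudera-scm-server-db   # state of the embedded database service
sudo systemctl restart cloudera-scm-server-db  # restart the embedded database
sudo systemctl restart cloudera-scm-server     # then restart the CM server itself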
07-25-2020 03:41 AM
@Bender, thank you. This helped in resolving the error.
07-24-2020 10:14 AM
Hi @Bender, as provided in the link, I tried to produce thread and heap dump files from the running process, but as I mentioned earlier those processes were getting killed/throwing errors. Here is the output I am getting when I run jmap as per the doc/link provided:

[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$ ps -fe | grep nodemanager
yarn 5235 12503 0 13:07 ? 00:00:00 /usr/lib/jvm/java-openjdk/bin/java -Dproc_nodemanager -Xmx1000m -Djava.net.preferIPv4Stack=true -server -Xms1073741824 -Xmx1073741824 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Dlibrary.leveldbjni.path=/run/cloudera-scm-agent/process/109-yarn-NODEMANAGER -Dhadoop.event.appender=,EventCatcher -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/CD-YARN-QafZaOEK_CD-YARN-QafZaOEK-NODEMANAGER-4756e03a64cd1a4e535550d4cd740b08_pid5235.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -Dhadoop.log.dir=/var/log/hadoop-yarn -Dyarn.log.dir=/var/log/hadoop-yarn -Dhadoop.log.file=hadoop-cmf-CD-YARN-QafZaOEK-NODEMANAGER-us-east-1a-test-east-cdh-tasknode5152.throtle-test.internal.log.out -Dyarn.log.file=hadoop-cmf-CD-YARN-QafZaOEK-NODEMANAGER-us-east-1a-test-east-cdh-tasknode5152.throtle-test.internal.log.out -Dyarn.home.dir=/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/lib/native -classpath /run/cloudera-scm-agent/process/109-yarn-NODEMANAGER:/run/cloudera-scm-agent/process/109-yarn-NODEMANAGER:/run/cloudera-scm-agent/process/109-yarn-NODEMANAGER:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/lib/*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/.//*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn/.//*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-mapreduce/lib/*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-mapreduce/.//*:/usr/share/cmf/lib/plugins/event-publish-5.16.2-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.16.2.jar:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn/.//*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn/lib/*:/run/cloudera-scm-agent/process/109-yarn-NODEMANAGER/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
yarn 5240 5235 0 13:07 ? 00:00:00 python2.7 /usr/lib64/cmf/agent/build/env/bin/cmf-redactor /usr/lib64/cmf/service/yarn/yarn.sh nodemanager
yarn 5487 31141 0 13:07 pts/0 00:00:00 grep --color=auto nodemanager
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$ /usr/lib/jvm/java-openjdk/bin/jmap -heap 5235 > /tmp/jmap_5235_heap.out
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.tools.jmap.JMap.runTool(JMap.java:201)
at sun.tools.jmap.JMap.main(JMap.java:130)
Caused by: java.lang.NullPointerException
at sun.jvm.hotspot.tools.HeapSummary.run(HeapSummary.java:157)
at sun.jvm.hotspot.tools.Tool.startInternal(Tool.java:260)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:223)
at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
at sun.jvm.hotspot.tools.HeapSummary.main(HeapSummary.java:50)
        ... 6 more

Here is the error message I am getting when I run jstack with the -l option:

[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$ ps -fe | grep nodemanager
yarn 4518 12503 0 13:04 ? 00:00:00 /usr/lib/jvm/java-openjdk/bin/java -Dproc_nodemanager -Xmx1000m -Djava.net.preferIPv4Stack=true -server -Xms1073741824 -Xmx1073741824 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Dlibrary.leveldbjni.path=/run/cloudera-scm-agent/process/109-yarn-NODEMANAGER -Dhadoop.event.appender=,EventCatcher -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/CD-YARN-QafZaOEK_CD-YARN-QafZaOEK-NODEMANAGER-4756e03a64cd1a4e535550d4cd740b08_pid4518.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -Dhadoop.log.dir=/var/log/hadoop-yarn -Dyarn.log.dir=/var/log/hadoop-yarn -Dhadoop.log.file=hadoop-cmf-CD-YARN-QafZaOEK-NODEMANAGER-us-east-1a-test-east-cdh-tasknode5152.throtle-test.internal.log.out -Dyarn.log.file=hadoop-cmf-CD-YARN-QafZaOEK-NODEMANAGER-us-east-1a-test-east-cdh-tasknode5152.throtle-test.internal.log.out -Dyarn.home.dir=/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/lib/native -classpath /run/cloudera-scm-agent/process/109-yarn-NODEMANAGER:/run/cloudera-scm-agent/process/109-yarn-NODEMANAGER:/run/cloudera-scm-agent/process/109-yarn-NODEMANAGER:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/lib/*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/.//*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn/.//*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-mapreduce/lib/*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-mapreduce/.//*:/usr/share/cmf/lib/plugins/event-publish-5.16.2-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.16.2.jar:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn/.//*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop-yarn/lib/*:/run/cloudera-scm-agent/process/109-yarn-NODEMANAGER/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
yarn 4523 4518 0 13:04 ? 00:00:00 python2.7 /usr/lib64/cmf/agent/build/env/bin/cmf-redactor /usr/lib64/cmf/service/yarn/yarn.sh nodemanager
yarn 4717 31141 0 13:05 pts/0 00:00:00 grep --color=auto nodemanager
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$ /usr/lib/jvm/java-openjdk/bin/jstack -F 4518 > /tmp/jstack_4518_f.out
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$ less /tmp/jstack_4518_f.out
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$
[yarn@us-east-1a-test-east-cdh-tasknode5152 process]$ /usr/lib/jvm/java-openjdk/bin/jstack -l 4518 > /tmp/jstack_4518_f.out
4518: Unable to open socket file: target process not responding or HotSpot VM not loaded
The -F option can be used when the target process is not responding

For process 4518, I ran jstack with the -F option and here is the output:

Debugger attached successfully.
Server compiler detected.
JVM version is 25.181-b13
Deadlock Detection:
No deadlocks found.
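In case it helps others hitting the same "Unable to open socket file" error, a hedged sketch of the forced (serviceability-agent) variants that can sometimes still read a non-responsive JVM. Run them as the same user and with the same JDK that started the process; the PID is the one from the session above:

sudo -u yarn /usr/lib/jvm/java-openjdk/bin/jstack -F 4518 > /tmp/jstack_4518_f.out
sudo -u yarn /usr/lib/jvm/java-openjdk/bin/jmap -F -histo 4518 > /tmp/jmap_4518_histo.out   # -F is documented for -histo/-dump, not -heap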
07-24-2020 12:42 AM
Hello @gss2020 , thank you for exploring our HDP Sandbox. For the default passwords, please see our "Learning the Ropes of the HDP Sandbox" tutorial page. Within this, please have a look at "Appendix A: Reference Sheet" for the login credentials. Please let me know if you need any further input from us. Thank you: Ferenc
07-22-2020 12:45 AM
Hello @netapp1, thank you for your deep dive into the 100-node limitation of the Cloudera Manager Express version on CDH 6.x clusters. The intent from Cloudera is to limit Cloudera Express 6.x usage to 100 nodes. How this is enforced, from a user-experience point of view, differs between minor versions (e.g. 6.0 vs 6.1 vs 6.2). This means that earlier versions might let you install the agents across all nodes, and once the installation completes, most functionality is blocked until you decrease the number of nodes to 100. We recognised that it would be best to block the installation outright when the node count exceeds 100, and in later versions this has been gradually improved; hence the discrepancy between the descriptions in the documentation. In short: the 100-node limitation is enforced across all Cloudera Express 6.x versions. Please let us know if we addressed your enquiry on this topic by pressing the "Accept as Solution" button, which helps other members to find the answer to a similar question. Thank you: Ferenc
07-21-2020 01:59 AM
Hello @getschwifty , did you have a chance to test if cross-realm trust works OK between your nodes from different realms? Kind regards: Ferenc
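If useful, a minimal sketch of such a test; the realm names, principal and host below are placeholders:

kinit user@REALM-A.EXAMPLE.COM                     # authenticate in the first realm
hdfs dfs -ls hdfs://nn.realm-b.example.com:8020/   # access a service in the other realm; should succeed if the trust works
klist                                              # a krbtgt/REALM-B.EXAMPLE.COM@REALM-A.EXAMPLE.COM entry confirms the cross-realm hop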