Member since 03-14-2016
4721 Posts · 1111 Kudos Received · 874 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2030 | 04-27-2020 03:48 AM |
| | 4020 | 04-26-2020 06:18 PM |
| | 3247 | 04-26-2020 06:05 PM |
| | 2599 | 04-13-2020 08:53 PM |
| | 3862 | 03-31-2020 02:10 AM |
01-28-2020 12:56 AM
@asmarz Please refer to the following docs to learn how to enable SPNEGO authentication. Once you have enabled Kerberos for your cluster, you can then also enable SPNEGO authentication. The first doc explains how to configure HTTP authentication for Hadoop components in a Kerberos environment; by default, access to the HTTP-based services and UIs for the cluster is not configured to require authentication.
1. https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/authentication-with-kerberos/content/authe_spnego_enabling_spnego_authentication_for_hadoop.html
2. https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/managing-and-monitoring-ambari/content/amb_start_kerberos_wizard_from_ambari_web.html
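For reference, enabling SPNEGO for the Hadoop web UIs boils down to a handful of core-site.xml properties like the following. This is a minimal sketch: the realm and keytab path below are placeholders you must adapt to your own environment.

```xml
<!-- core-site.xml: require SPNEGO (Kerberos) auth on Hadoop HTTP UIs.
     HTTP/_HOST@EXAMPLE.COM and the keytab path are example values. -->
<property>
  <name>hadoop.http.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.http.authentication.simple.anonymous.allowed</name>
  <value>false</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/spnego.service.keytab</value>
</property>
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
</property>
```

The linked docs walk through the same settings via Ambari, which is the preferred way to apply them on an HDP cluster.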
01-27-2020 10:22 PM
1 Kudo
@VijaySankar What is your NiFi version? It looks like there is already a similar JIRA [1] which addresses this behaviour of "ExecuteStreamCommand" in NiFi 1.10.0.
[1] ExecuteStreamCommand filters out any double quotes when parsing the "Command Arguments": https://issues.apache.org/jira/browse/NIFI-3221
01-27-2020 02:13 PM
@ChineduLB Is "MyClassName" the fully qualified class name? (That is, does this class belong to a package? If so, please specify the fully qualified class name, e.g. "--class aaa.bbb.MyClassName".) Also, can you list the JAR contents to check that the class is placed inside it correctly? Please share the output of the following commands:
# jar -tvf /projects/myscala.jar
# javap -cp /projects/myscala.jar MyClassName
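As a side note, if `jar -tvf` shows the class under a package directory, the value to pass to `--class` is that entry path with the slashes turned into dots. A minimal shell sketch (the entry name below is a made-up example, not taken from your JAR):

```shell
# Convert a JAR entry path (as printed by "jar -tvf") into the
# fully qualified class name expected by spark-submit's --class flag.
entry="aaa/bbb/MyClassName.class"   # hypothetical jar entry
fqcn=$(echo "$entry" | sed 's#/#.#g; s#\.class$##')
echo "$fqcn"                        # prints aaa.bbb.MyClassName
```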
01-14-2020 01:31 PM
@nk_11 There are some recommendations from the HDFS Balancer perspective to make sure it runs fast with maximum performance, such as the parameters described in this link: "dfs.datanode.balance.max.concurrent.moves", "dfs.balancer.max-size-to-move", "dfs.balancer.moverThreads" and "dfs.datanode.balance.max.bandwidthPerSec".
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/data-storage/content/recommended_configurations_for_the__balancer.html

Regarding the heavy usage of the YARN "local-dirs", please refer to the following article, which might give a better idea. You can also tune the following yarn-site properties:
- "yarn.nodemanager.local-dirs" points to the location where intermediate (temporary) data is written on the nodes where the NodeManager runs. The NodeManager service runs on all worker nodes. Please check that this directory has enough space.
- "yarn.nodemanager.localizer.cache.target-size-mb" defines the maximum disk space to be used for localizing resources. Once the total disk size of the cache exceeds this value, the deletion service will try to remove files which are not used by any running container.
- "yarn.nodemanager.localizer.cache.cleanup.interval-ms" defines the interval at which unused resources are deleted if the total cache size exceeds the configured maximum. Unused resources are those which are not referenced by any running container.
https://community.cloudera.com/t5/Community-Articles/How-to-clear-local-file-cache-and-user-cache-for-yarn/ta-p/245160
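The two cache properties above go into yarn-site.xml. A sketch with illustrative values (a ~10 GB cache cap and a 10-minute cleanup interval; these are examples to adapt, not recommendations for your cluster):

```xml
<!-- yarn-site.xml: localizer cache tuning (example values only) -->
<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <!-- ~10 GB cap on the per-node local resource cache -->
  <value>10240</value>
</property>
<property>
  <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
  <!-- check every 10 minutes for unused localized resources -->
  <value>600000</value>
</property>
```

On an Ambari-managed cluster, set these through the YARN config screens rather than editing the file by hand, so the change survives restarts.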
01-14-2020 12:13 PM
@nk_11 HDFS data might not always be distributed uniformly across DataNodes. If the DataNodes are not balancing the data properly, you can run the HDFS Balancer from the Ambari UI: Ambari UI --> HDFS --> "Service Actions" (drop-down) --> Rebalance HDFS
https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/managing-and-monitoring-ambari/content/amb_rebalance_hdfs_blocks.html
You also mentioned that you are getting a "node manager unhealthy alert" with the threshold limit set to 90%. Do you mean the "NodeManager is unhealthy" alert due to local-dirs (or "local-dirs are bad" errors)? If yes, it may be because the "yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage" property of the YARN config is set to 90% by default. If the utilization of the YARN disk (in this case /data) is above the limit set by that property, try these options:
1. Free up some disk space, OR
2. Increase the value of "yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage" through Ambari, followed by a NodeManager restart.
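If you choose option 2, the change in yarn-site.xml would look like the following sketch (95.0 is an example value, not a recommendation; pick a threshold that fits your disk sizing, and restart the NodeManagers afterwards):

```xml
<!-- yarn-site.xml: raise the disk utilization threshold (example: 95%) -->
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>95.0</value>
</property>
```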
01-08-2020 12:51 PM
@vsrikanth9 From your output we see that your Operating System is CentOS 7:
/etc/redhat-release ==> CentOS Linux release 7.6.1810 (Core)
But your repo files seem to be installing the package "cloudera-manager-agent-5.16.2-1.cm5162.p0.7.el6.x86_64". NOTICE: we see "el6" in the package name; it should be "el7", I guess, as your Operating System is CentOS 7. It looks like you are using a custom local repo for the yum packages from "http://cm.bigdata.com/cloudera-cdh5/". Please check whether it contains the CentOS 6 or the CentOS 7 binaries. If you are using Cloudera Manager 5.16.2 on CentOS 7, please check that you have copied the correct repo (the CentOS 7 repo instead of the CentOS 6 one).
https://docs.cloudera.com/documentation/enterprise/release-notes/topics/cm_vd.html#cmvd_topic_1
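A quick way to spot such a mismatch is to compare the dist tag embedded in the package name against the one your OS expects (the package name below is the one from your output; the `rpm -E` check only works on the RPM-based host itself):

```shell
# Extract the "elN" dist tag from the package name and compare it
# with what this host's RPM macros expect (e.g. ".el7" on CentOS 7).
pkg="cloudera-manager-agent-5.16.2-1.cm5162.p0.7.el6.x86_64"
pkg_dist=$(echo "$pkg" | grep -o 'el[0-9]' | head -1)
echo "package dist tag: $pkg_dist"   # prints: package dist tag: el6
# On the host itself (RPM systems only):
#   rpm -E '%{dist}'                 # e.g. ".el7" on CentOS 7
```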
01-07-2020 04:36 PM
1 Kudo
@philpo When doing SSH, can you try changing the port from 4200 to 2222? Example:
# ssh root@127.0.0.1 -p 2222
(OR)
# ssh root@localhost -p 2222
OR try opening the web terminal on port 4200 in your web browser (4200 is not the SSH port; it is the web terminal port used by the "shellinaboxd" service):
http://127.0.0.1:4200
01-07-2020 01:34 PM
@singm125 Which version of Ambari are you using? I see that there is a JIRA opened and addressed in Ambari 2.7.4 for the "null" user appearing in "ambari-audit.log".
[JIRA] Ambari audit log shows "null" user when executing an API call: https://issues.apache.org/jira/browse/AMBARI-25234
01-04-2020 12:36 PM
@Jason4Ever The error you are getting seems to be an access restriction: accessing the requested page requires special permissions. Hence it would be good if you could mail your concerns to customerrelations@cloudera.com. Somebody from our customer relations team should be able to help you with your issues.
Reference: https://community.cloudera.com/t5/Support-Announcements/Upgraded-Cloudera-Support-Portal-Launches-This-Weekend/ba-p/282000
01-04-2020 12:11 AM
@Jason4Ever The "Authentication Error" you are getting indicates some issue with your credentials. So it is better to file a non-technical case within the support portal (https://my.cloudera.com) to obtain credentials, or to get clarity on what is wrong with the credentials.
Reference: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-installation/content/ch03s02s01.html
Snippet from the above doc: "Authentication credentials for new customers and partners are provided in an email sent from Cloudera to registered support contacts. Existing users can file a non-technical case within the support portal (https://my.cloudera.com) to obtain credentials."