Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2722 | 04-27-2020 03:48 AM |
| | 5283 | 04-26-2020 06:18 PM |
| | 4448 | 04-26-2020 06:05 PM |
| | 3574 | 04-13-2020 08:53 PM |
| | 5377 | 03-31-2020 02:10 AM |
07-04-2018
09:25 AM
1 Kudo
@Michael Bronson You can add those properties from the Ambari UI as follows: Ambari dashboard --> YARN --> Configs --> Advanced --> Custom yarn-site --> click on "Add Property". Then add the following two properties (here the value 10240 is in MB, i.e. around 10 GB; for 5 GB it can be set to 5120):
yarn.nodemanager.localizer.cache.target-size-mb = 10240
yarn.nodemanager.localizer.cache.cleanup.interval-ms = 300000
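If you later want to confirm that the values actually landed on a NodeManager host after the restart, a quick check like the following should work (assuming the default HDP configuration directory /etc/hadoop/conf; adjust the path if yours differs):
# grep -A 1 "localizer.cache" /etc/hadoop/conf/yarn-site.xml
This should print the matching name/value pairs once the NodeManagers have picked up the new config.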
07-04-2018
09:06 AM
2 Kudos
@Michael Bronson Please check the YARN configs under: Ambari dashboard --> YARN --> Configs --> Advanced --> Custom yarn-site --> Add/find Property, and look for the following properties:
yarn.nodemanager.localizer.cache.target-size-mb: This decides the maximum disk space to be used for localized resources. (At present there is no individual limit for the PRIVATE / APPLICATION / PUBLIC caches; see YARN-882.) Once the total disk size of the cache exceeds this value, the Deletion service will try to remove files which are not used by any running containers. There is currently no separate quota for the user / public / private caches. The limit applies to the total across all disks, not on a per-disk basis.
yarn.nodemanager.localizer.cache.cleanup.interval-ms: After this interval, the resource localization service will try to delete unused resources if the total cache size exceeds the configured max size. Unused resources are those which are not referenced by any running container. Every time a container requests a resource, that container is added to the resource's reference list and remains there until the container finishes, which avoids accidental deletion of the resource. As part of container cleanup (when the container finishes), the container is removed from the resource's reference list. Once a resource's reference count drops to zero, it becomes an ideal candidate for deletion; resources are then deleted on an LRU basis until the current cache size drops below the target size.
For example, set the values to something like:
yarn.nodemanager.localizer.cache.target-size-mb = 4096 (i.e. 4 GB, or as desired; note the value is in MB)
yarn.nodemanager.localizer.cache.cleanup.interval-ms = 300000 (or as desired)
Reference: https://hortonworks.com/blog/resource-localization-in-yarn-deep-dive/
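To see how much disk space the localized cache is actually consuming on a NodeManager host, you can measure the cache directories directly (this assumes the default HDP value of yarn.nodemanager.local-dirs, /hadoop/yarn/local; substitute your own local-dirs value if it differs):
# du -sh /hadoop/yarn/local/filecache /hadoop/yarn/local/usercache/*/filecache
Comparing that total against the configured target-size-mb tells you whether the deletion service should be kicking in.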
07-04-2018
06:50 AM
@sudhir reddy Can you try switching to the "hive" user and then running the hive command?
# su - hive
# hive
Or please make sure the "/user/root" directory is created in your HDFS:
# su - hdfs -c "hdfs dfs -mkdir /user/root"
# su - hdfs -c "hdfs dfs -chown root:hadoop /user/root"
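To confirm the directory was created with the expected ownership, you can list it afterwards:
# su - hdfs -c "hdfs dfs -ls /user | grep root"
The entry should show root and hadoop in the owner and group columns.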
07-04-2018
04:55 AM
1 Kudo
@Prabin Silwal The latest HDP release is HDP 2.6.5, which supports Ubuntu: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_support-matrices/content/ch01.html
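To double-check that your Ubuntu release matches one of the versions listed in that support matrix, you can verify it on the host first:
# lsb_release -a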
06-29-2018
04:21 AM
1 Kudo
@Ray Jiang This error indicates that your NameNode is not running (which is just a symptom). We will need to find out why the NameNode is not running, so can you please share the NameNode log here so that we can see if there are some errors? (/var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox.hortonworks.com.log)
Also, can you please try starting the NameNode on your own using the command line (instead of using Ambari), in order to isolate whether the issue is on the Ambari side or the NameNode itself has some problem: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_reference/content/starting_hdp_services.html
# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
After that, please verify the NameNode log (/var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox.hortonworks.com.log) and check whether port 50070 is opened or not:
# netstat -tnlpa | grep 50070
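As a quick way to spot the failure cause in that log, you can filter the recent entries for errors and exceptions (same log path as above):
# tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox.hortonworks.com.log | grep -iE "error|exception"
Please share whatever this turns up along with the full log.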
06-28-2018
12:56 PM
@Victor If this resolved/answers your query/issue then please mark this HCC thread as answered by clicking on the "Accept" link on the correct answer. That way it will help other HCC users to quickly find the answers.
06-28-2018
11:37 AM
@kanna k Please try this (it might or might not work, as "portainer" is not an HDP product, so you will have to refer to its documentation):
# ./portainer -p :9100 --data /home/myadmin/portainer_data
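Once it starts, you can confirm that it is now listening on the new port with the same kind of netstat check used earlier in this thread:
# netstat -tnlpa | grep 9100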
06-28-2018
11:32 AM
@kanna k It indicates that your custom application "./portainer" is already consuming port 9000, hence the HST server cannot listen on that port, and that is the reason the HST server is not coming up. You have two options:
1). Either make the "./portainer" application listen on some other port, or kill it (or refer to: http://portainer.readthedocs.io/en/stable/deployment.html):
# kill -9 85623
2). Change the HST server port to something other than 9000: Ambari UI --> SmartSense --> Configs --> Operations --> Web Ui Port (default is 9000)
You need to choose between these two options.
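Whichever option you pick, it is worth verifying who owns port 9000 before restarting the HST server:
# netstat -tnlpa | grep 9000
If the portainer process was killed, this should return nothing (or only the HST server once it is up).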
06-28-2018
11:22 AM
@kanna k Please check whether that "portainer" process is the one using port 9000:
# netstat -tnlpa | grep 85623
If yes, then please try to run that process on a different port to avoid the port conflict.
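If lsof happens to be installed on that host, you can also query the port directly instead of filtering by PID:
# lsof -i :9000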
06-28-2018
11:07 AM
@Victor Please try increasing the value of the 'ZEPPELIN_INTERPRETER_OUTPUT_LIMIT' parameter (default value: 102400) in zeppelin-env.sh so that Zeppelin picks up the new value. You can set the desired value through Ambari UI --> Configs --> "Advanced zeppelin-env" --> zeppelin_env_content. Inside this text area just add a line like:
export ZEPPELIN_INTERPRETER_OUTPUT_LIMIT=2500000
Then restart the Zeppelin service. To know more about this parameter, please refer to https://zeppelin.apache.org/docs/0.7.2/install/configuration.html
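To confirm the change was rendered into the environment file after the restart, you can grep for it (assuming the default HDP Zeppelin conf directory /etc/zeppelin/conf; adjust if your install differs):
# grep ZEPPELIN_INTERPRETER_OUTPUT_LIMIT /etc/zeppelin/conf/zeppelin-env.sh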