Member since: 10-24-2015
Posts: 171
Kudos Received: 379
Solutions: 23
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2667 | 06-26-2018 11:35 PM
 | 4354 | 06-12-2018 09:19 PM
 | 2879 | 02-01-2018 08:55 PM
 | 1452 | 01-02-2018 09:02 PM
 | 6781 | 09-06-2017 06:29 PM
03-13-2017 08:46 PM
Daniel Kozlowski, any explanation of why this error was happening on the 1st attempt? And how did it get resolved on the interpreter restart?
03-07-2017 12:13 AM
1 Kudo
@Viswa, refer to the "2.1 USE TERMINAL TO FIND THE HOST IP SANDBOX RUNS ON" section in the document below. https://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/#explore-ambari From the above steps, find out the Ambari host; typically the 1st node runs the Ambari server. After figuring out the ambari-server host, you can follow the steps from the 1st comment to see if you can access ambari-server.
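As a quick reachability check, something like this sketch could confirm the Ambari UI responds (AMBARI_HOST is a placeholder; 8080 is Ambari's default port, adjust if yours differs):

curl -s -o /dev/null -w "%{http_code}\n" http://AMBARI_HOST:8080
# 200 (or a redirect code) means the Ambari web UI is reachable.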
03-06-2017 11:36 PM
2 Kudos
@Viswa, There can be a few possibilities why you couldn't find ambari-server.

1. ambari-server is running but missing from PATH

In this case, can you please check whether ambari-server is actually running? Run ps aux to look for the Ambari server process: [root@xxx ~]# ps aux | grep ambari-server
root 83696 6.2 0.3 17957784 958424 ? Sl 07:02 56:53 /usr/lib/jvm/java-openjdk/bin/java -server -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:CMSInitiatingOccupancyFraction=60 -XX:+CMSClassUnloadingEnabled -Dsun.zip.disableMemoryMapping=true -Xms512m -Xmx2048m -XX:MaxPermSize=128m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -cp /etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/mysql-connector-java.jar org.apache.ambari.server.controller.AmbariServer

Then try to find out where ambari-server is installed. In most environments it can be found at /usr/sbin/ambari-server: find / -name "ambari-server"

Try running ambari-server --help with the full path. It should print a message like the one below. [root@xxx ~]# /usr/sbin/ambari-server --help
Using python /usr/bin/python
Usage: /usr/sbin/ambari-server
{start|stop|reset|restart|upgrade|status|upgradestack|setup|setup-jce|setup-ldap|sync-ldap|set-current|setup-security|refresh-stack-hash|backup|restore|update-host-names|check-database|enable-stack|setup-sso|db-cleanup|install-mpack|uninstall-mpack|upgrade-mpack|setup-kerberos} [options]
Use /usr/sbin/ambari-server.py <action> --help to get details on options available.
Or, simply invoke ambari-server.py --help to print the options.

If that works, it is likely that /usr/sbin is missing from PATH. Please add /usr/sbin to PATH with export PATH=$PATH:/usr/sbin

2. Ambari server failing

Check whether the Ambari server is running with ps aux | grep ambari. If you do not find the Ambari server process, please check the Ambari server log. You can find it at /var/log/ambari-server/ambari-server.log
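Putting the checks together, a minimal troubleshooting sequence might look like this (paths assume a default Ambari install):

ps aux | grep [a]mbari-server                        # is the Ambari server process up?
find / -name "ambari-server" 2>/dev/null             # locate the binary (usually /usr/sbin)
export PATH=$PATH:/usr/sbin                          # make it resolvable without the full path
ambari-server status                                 # confirm the daemon state
tail -n 50 /var/log/ambari-server/ambari-server.log  # inspect recent failures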
03-06-2017 08:55 PM
2 Kudos
@Alistair McIntyre, Livy has a feature to stop the Spark application after a certain timeout, configured via the livy.server.session.timeout property. The Apache JIRA tracking a similar issue is https://issues.apache.org/jira/browse/ZEPPELIN-1293. You can set up a Livy interpreter and run all your Spark notebooks with the livy.spark interpreter. Links to install and configure the Livy interpreter are below.
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_zeppelin-component-guide/content/install-livy.html
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_zeppelin-component-guide/content/config-livy-interp.html
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_zeppelin-component-guide/content/zepp-with-spark.html
https://community.hortonworks.com/articles/80059/how-to-configure-zeppelin-livy-interpreter-for-sec.html
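A minimal sketch of the relevant setting (the file location and duration format are assumptions; on HDP the Livy configs are typically managed through Ambari):

# /etc/livy/conf/livy.conf (location may differ on your install)
# Sessions idle longer than this are stopped, releasing their YARN resources.
livy.server.session.timeout = 1h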
03-03-2017 11:36 PM
4 Kudos
@Will Dailey, The JVM heap can be preconfigured for memory boundaries: the initial heap size (defined by the -Xms option) and the maximum heap size (defined by the -Xmx option). Used memory, from the perspective of the JVM, is working set + garbage. Committed memory is a measure of how much memory the JVM heap is really consuming. If ( -Xms < -Xmx ) and ( used memory == current heap size ), the JVM is likely to grow its heap after a full garbage collection. However, if the current heap size has reached -Xmx (either through heap growth or because -Xms == -Xmx by configuration), the heap cannot grow any further. Both of these values are important for debugging datanode failures related to Out of Memory, PermGen, and GC errors. Reference link: https://pubs.vmware.com/vfabric52/index.jsp?topic=/com.vmware.vfabric.em4j.1.2/em4j/conf-heap-management.html
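As an illustration, the flags and one way to observe used vs. committed heap (values are examples only; jstat ships with the JDK):

java -Xms512m -Xmx2048m ...   # heap starts at 512 MB and may grow to 2 GB
jstat -gccapacity <pid>       # committed vs. maximum capacity per generation
jstat -gcutil <pid>           # utilization of each generation, in percent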
02-15-2017 09:00 PM
1 Kudo
Glad to know I was able to help.
02-15-2017 12:41 AM
2 Kudos
@Vijay Lakshman, it looks like the USER environment variable is missing on the machines. Can you please check on your hosts whether $USER is set for all the users (such as hdfs, yarn, mapred, etc.)? You can also use "printenv" to print all environment variables. [root@xxx]# sudo su hdfs
bash-4.2$ echo $USER
hdfs
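To check all the service accounts in one pass, a small loop like this sketch could help (the user list is an example):

for u in hdfs yarn mapred; do
  su - "$u" -c 'echo "$USER"'   # a login shell (-) should set USER for the target user
done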
02-14-2017 12:14 AM
2 Kudos
@Vijay Lakshman, how did you restart the MapReduce history server after the machine reboot? Did you start it through Ambari? Can you also check the value of the HADOOP_MAPRED_PID_DIR variable in hadoop-env.sh? Ideally it should be as below. export HADOOP_MAPRED_PID_DIR=/var/run/hadoop-mapreduce/$USER In order to fix this issue, stop the MapReduce history server and make sure /var/run/hadoop-mapreduce/mapred--historyserver.pid and /var/run/hadoop-mapreduce/mapred/mapred-mapred-history-server.pid are deleted. Then start the history server through Ambari.
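As a sketch, the cleanup might look like this (pid file paths taken from the reply above; stop and start the service via Ambari):

# Stop the history server in Ambari first, then remove the stale pid files:
rm -f /var/run/hadoop-mapreduce/mapred--historyserver.pid
rm -f /var/run/hadoop-mapreduce/mapred/mapred-mapred-history-server.pid
# Finally, start the history server again through Ambari.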
02-11-2017 05:28 AM
1 Kudo
@Shigeru Takehara, You can definitely have nodes with different memory/CPU in a cluster. You can have a machine with 5 GB memory and 2 cores as one of your nodes without lowering the maximum to 5 GB globally. You can set yarn.nodemanager.resource.memory-mb=20000 on machines with 20 GB memory and yarn.nodemanager.resource.memory-mb=5000 on the machine with 5 GB memory. You can also manage different configurations on different node managers using Ambari; this is called host config groups. https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-user-guide/content/using_host_config_groups.html
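For example, a sketch of the per-node entry in yarn-site.xml on the 5 GB machine (the 20 GB nodes would use 20000 instead):

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>5000</value>
</property>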
02-11-2017 12:44 AM
1 Kudo
@Shigeru Takehara, You are correct. YARN reads the memory/vcore information from the YARN configuration only. Typically, the admin is responsible for specifying correct memory/vcore data to YARN. In a Hadoop cluster, a node is shared by multiple services such as the datanode, region server, etc. Suppose a node has 36 GB of memory and 3 daemons running, such as datanode, node manager, and region server. An admin may choose to give all 3 services equal memory. In this case, the admin will need to update yarn-site.xml to set "yarn.nodemanager.resource.memory-mb=12000". The same goes for vcores. Refer to http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/ to understand how to configure YARN memory correctly. You can use Ambari to install the cluster. It has a feature called stack advisor which will set up the cluster with recommended configs. It actually queries the hosts, gathers all the necessary data such as disk space and RAM, and, depending on which daemons are configured on a host, sets up the configuration.
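A sketch of the resulting yarn-site.xml entries for that example node (the vcores value is illustrative only):

<!-- roughly one third of the node's 36 GB is offered to YARN -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>12000</value>
</property>
<!-- illustrative: vcores offered to YARN on this node -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>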