Member since
04-14-2016
9
Posts
1
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 863 | 07-12-2017 07:28 AM
 | 4190 | 01-24-2017 03:04 PM
07-12-2017
07:28 AM
I forgot to upgrade the ambari-agent on the server host. As a result, an older version of ambari_commons/inet_utils.py was left in the Python packages: /usr/lib/python2.6/site-packages/resource_monitoring/ambari_commons/inet_utils.py. After also upgrading the agent, the Python package was updated as well and the upgrade went through.
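In case it helps anyone hitting the same thing, a rough sketch of the recovery on a yum-based CentOS 6 install (the exact command set is my assumption of the equivalent, adapt to your repos and paths):
yum clean all
yum upgrade ambari-server ambari-agent
# both should now report the same 2.5.1.x release
rpm -q ambari-server ambari-agent
# the refreshed copy should define the function the server imports;
# a stale copy will not show up in this search
grep -rln "def wait_for_port_opened" /usr/lib/python2.6/site-packages/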
07-12-2017
07:07 AM
I'm upgrading Ambari from 2.4.2 to 2.5.1 on HDP 2.4.3, running CentOS 6. I followed the instructions from https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-upgrade/content/upgrade_ambari.html . However, when I got to step 9 (ambari-server upgrade), I got an error:
Using python /usr/bin/python
Upgrading ambari-server
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 39, in <module>
from ambari_server.serverSetup import reset, setup, setup_jce_policy
File "/usr/lib/python2.6/site-packages/ambari_server/serverSetup.py", line 88, in <module>
JDBC_DB_OPTION_VALUES = get_supported_jdbc_drivers()
File "/usr/lib/python2.6/site-packages/ambari_server/serverSetup.py", line 85, in get_supported_jdbc_drivers
factory = DBMSConfigFactory()
File "/usr/lib/python2.6/site-packages/ambari_server/dbConfiguration.py", line 344, in __init__
from ambari_server.dbConfiguration_linux import createPGConfig, createOracleConfig, createMySQLConfig, \
File "/usr/lib/python2.6/site-packages/ambari_server/dbConfiguration_linux.py", line 49, in <module>
from ambari_commons.inet_utils import wait_for_port_opened
ImportError: cannot import name wait_for_port_opened
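For anyone hitting the same ImportError, a hedged sketch of checks that narrow down which copy of ambari_commons is being picked up and which package installed it (the one-liner and the example path are my assumptions, not from the upgrade guide; substitute the path the first command prints):
# which inet_utils.py the server's python actually imports
python -c "import ambari_commons.inet_utils as m; print(m.__file__)"
# which package owns that copy (use the path printed above)
rpm -qf /usr/lib/python2.6/site-packages/ambari_commons/inet_utils.py
# compare installed releases of server and agent
rpm -q ambari-server ambari-agent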
Labels:
- Apache Ambari
01-24-2017
03:04 PM
In the compilation environment for myjar.jar there was an old Phoenix jar that had hbase-client-2.6.jar inside it. After removing it and compiling a new jar, the error was fixed.
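For reference, a rough way to catch this kind of bundled-class conflict (the lib/*.jar glob and the Maven invocation are placeholders for whatever the build actually uses):
# list every jar on the compile classpath that bundles the HBase client Scan class
for j in lib/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q "org/apache/hadoop/hbase/client/Scan.class" && echo "$j"
done
# for a Maven build, the dependency tree shows which artifact drags in an old hbase-client
mvn dependency:tree -Dincludes=org.apache.hbase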
01-24-2017
11:20 AM
I'm running a Spark job that does an HBase scan. However, I get an error: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Scan.setCacheBlocks(Z)V. From what I've looked up, it is caused by a version mismatch between hbase-client.jar and the HBase version, but I used only HDP-compiled jars. My HDP version is 2.4.3.0. I submit the job the following way:
export HADOOP_CONF_DIR=/etc/hadoop/conf/
export SPARK_CONF_DIR=/etc/spark/conf
/usr/hdp/current/spark-client/bin/spark-submit
--class MyClass
--master yarn-cluster
--num-executors 4
--driver-memory 1g
--executor-memory 4g
--executor-cores 6
--conf spark.driver.cores=6
--conf spark.storage.memoryFraction=0.8
--conf spark.shuffle.memoryFraction=0.1
--conf spark.yarn.jar=/usr/hdp/current/spark-client/lib/spark-hdp-assembly.jar
--conf spark.yarn.executor.memoryOverhead=2048
--conf spark.akka.frameSize=100
--conf spark.driver.extraJavaOptions="-Xss10m -XX:MaxPermSize=512M -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC " --conf spark.executor.extraJavaOptions="-Xss10m -XX:MaxPermSize=512M -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC "
--jars /usr/hdp/current/hive-client/lib/hive-common.jar,
/usr/hdp/current/hive-client/lib/hive-hbase-handler.jar,
/usr/hdp/current/hbase-client/lib/hbase-common.jar,
/usr/hdp/current/hbase-client/lib/hbase-server.jar,
/usr/hdp/current/hbase-client/lib/hbase-client.jar,
/usr/hdp/current/hbase-client/lib/hbase-procedure.jar,
/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar,
/usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,
/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar,
/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,
hdfs://mycluster:8020/lib/java/dependencies/mysql-connector-java-5.0.8-bin.jar
hdfs://mycluster:8020/lib/scala/myjar.jar
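For anyone debugging the same NoSuchMethodError, a hedged sketch of how to check which Scan class the job was compiled against versus what the cluster ships (the myjar.jar path is a placeholder for wherever the fat jar is built):
# inspect the signature in the HDP hbase-client jar; if setCacheBlocks is shown
# returning Scan rather than void, the job was compiled against an older client API
javap -classpath /usr/hdp/current/hbase-client/lib/hbase-client.jar org.apache.hadoop.hbase.client.Scan | grep setCacheBlocks
# check whether the application jar bundles its own, older copy of the class
unzip -l /path/to/myjar.jar | grep "org/apache/hadoop/hbase/client/Scan.class"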
Labels:
- Apache HBase
- Apache Spark
04-14-2016
02:53 PM
Hey @Predrag Minovic, thanks for the quick reply. How can I add allowed.system.users=yarn,admin and change the banned.users?
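For context, the end state I'm after in /etc/hadoop/conf/container-executor.cfg looks roughly like this (the key names are the standard container-executor.cfg settings, the values are only an example for my cluster, and since Ambari regenerates this file from container-executor.cfg.j2 on a NodeManager restart, I assume the change has to go through Ambari or the template rather than editing the file directly):
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,mapred,bin
min.user.id=500
allowed.system.users=yarn,admin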
04-14-2016
12:49 PM
1 Kudo
After upgrading to Ambari 2.2.1, when the NodeManager is restarted, Ambari overwrites container-executor.cfg with the template from container-executor.cfg.j2. This template is wrong and causes YARN to fail every job, because the min user id is set to 1000.
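A quick hedged check of what Ambari actually rendered and why containers get rejected (the file path is the HDP default on my nodes, and "admin" is just an example of an account that submits jobs):
grep -E "min.user.id|banned.users|allowed.system.users" /etc/hadoop/conf/container-executor.cfg
# container-executor refuses any submitting user whose uid is below min.user.id,
# so regular accounts with uid < 1000 get rejected with this template
id -u admin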
Labels:
- Apache Ambari
- Apache YARN