Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1916 | 06-15-2020 05:23 AM |
|  | 15448 | 01-30-2020 08:04 PM |
|  | 2068 | 07-07-2019 09:06 PM |
|  | 8099 | 01-27-2018 10:17 PM |
|  | 4566 | 12-31-2017 10:12 PM |
09-04-2017
03:58 PM
Is it possible to verify through the API (not from the logs) that all services and components have auto-start enabled?
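Not confirmed in this thread, but a minimal sketch of one way to check it, assuming an Ambari release (2.4+) that exposes per-component auto-start as ServiceComponentInfo/recovery_enabled; AMBARI_HOST, CLUSTER_NAME and the admin:admin credentials are placeholders, not values from this cluster:

```bash
# Hedged sketch: list the auto-start (recovery) flag for every component.
# AMBARI_HOST, CLUSTER_NAME and admin:admin are placeholders.
curl -s -u admin:admin -H 'X-Requested-By: ambari' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/components?fields=ServiceComponentInfo/component_name,ServiceComponentInfo/recovery_enabled"
```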
09-04-2017
03:55 PM
I ran the API call, but in the Ambari GUI I don't see any difference (it looks as if all services are still disabled).
09-04-2017
02:37 PM
Ambari services can be configured to start automatically on system boot. Each service can be configured to start all components, masters and workers, or only selected ones. So how can I enable all services in an Ambari cluster to start automatically on system boot through the API?
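A minimal sketch of one possible API call, assuming an Ambari release (2.4+) where auto-start is driven by ServiceComponentInfo/recovery_enabled; AMBARI_HOST, CLUSTER_NAME, admin:admin and the component names are placeholders to adapt:

```bash
# Hedged sketch: enable auto-start (recovery) for a set of components.
# Host, cluster, credentials and the component names are placeholders.
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"ServiceComponentInfo": {"recovery_enabled": "true"}}' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/components?ServiceComponentInfo/component_name.in(NAMENODE,DATANODE,RESOURCEMANAGER,NODEMANAGER)"
```

Whether the same PUT can be issued without the component_name.in(...) predicate to cover every component depends on the Ambari version, so check the REST API documentation for your release.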
Labels:
- Apache Ambari
- Apache Hadoop
09-01-2017
07:20 AM
When I run/start the Thrift server with

su spark ./sbin/start-thriftserver.sh

or

./sbin/start-thriftserver.sh --master yarn-client --executor-memory 512m --hiveconf hive.server2.thrift.port=100015

I get the following in the log /var/log/spark2/spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master.out:

Spark Command: /usr/jdk64/jdk1.8.0_112/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/2.6.0.3-8/spark2/conf/:/usr/hdp/2.6.0.3-8/spark2/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx10000m org.apache.spark.deploy.SparkSubmit --master yarn-client --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server --executor-memory 512m spark-internal --hiveconf hive.server2.thrift.port=100015
========================================
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
17/09/01 07:02:05 WARN AbstractLifeCycle: FAILED ServerConnector@1fc793c2{HTTP/1.1}{0.0.0.0:4040}: java.net.BindException: Address already in use
java.net.BindException: Address already in use

How can I solve it?
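Not an authoritative fix, but a sketch of the usual checks for this log: the BindException is on 0.0.0.0:4040, which is the Spark UI port rather than the Thrift port, so it often just means another Spark application (for example an already-running Thrift Server) holds that port. The paths below are assumed from the HDP 2.6 spark2 layout shown in the log; lsof and the 4041 port value are illustrative.

```bash
# Hedged sketch, assuming the HDP 2.6 spark2 client paths from the log above.
# 1. See which process already listens on the Spark UI port 4040:
lsof -iTCP:4040 -sTCP:LISTEN

# 2. If it is a stale Thrift Server instance, stop it before starting a new one:
su - spark -c '/usr/hdp/current/spark2-client/sbin/stop-thriftserver.sh'

# 3. Or start the new instance with a different UI port and the non-deprecated
#    master syntax (spark.ui.port is a standard Spark setting):
su - spark -c '/usr/hdp/current/spark2-client/sbin/start-thriftserver.sh \
  --master yarn --deploy-mode client \
  --executor-memory 512m \
  --conf spark.ui.port=4041 \
  --hiveconf hive.server2.thrift.port=100015'
```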
Labels:
- Apache Ambari
- Apache Spark
08-29-2017
10:14 AM
A generic question: do we need to restart after each parameter change, or can we do a single restart after setting all the parameters?
08-29-2017
10:13 AM
A generic question: do we need to restart after each parameter change, or can we do a single restart after setting all the parameters?
08-29-2017
05:39 AM
Is it possible to tell, from the full list of parameters, which values are causing the problems?
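Nothing in the thread pins this down, but one hedged way to narrow it is to dump each config type from the working (new) and failing (old) cluster with the same configs.sh script and diff the two. NEW_AMBARI_HOST, OLD_AMBARI_HOST, the cluster names and admin:admin are placeholders.

```bash
# Hedged sketch: compare one config type between the two clusters.
# Hosts, cluster names and credentials are placeholders.
cd /var/lib/ambari-server/resources/scripts
./configs.sh -u admin -p admin get NEW_AMBARI_HOST NEW_CLUSTER yarn-site > /tmp/yarn-site.new
./configs.sh -u admin -p admin get OLD_AMBARI_HOST OLD_CLUSTER yarn-site > /tmp/yarn-site.old
diff /tmp/yarn-site.old /tmp/yarn-site.new

# Repeat for the other config types (mapred-site, spark2-defaults, tez-site, ...).
```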
08-29-2017
05:35 AM
Maybe we need to restart after each parameter update? Or should we apply the updates according to config-type priority?
08-29-2017
05:27 AM
The old and the new clusters are identical except for the parameters (for sure), so I don't understand why on the new cluster all parameters are updated correctly from the update JSON, while when we apply the same parameter update on the old cluster we end up with services/components that will not start.
08-29-2017
04:54 AM
I need to update the following parameters in my old Ambari cluster to the latest values (each parameter has a config type): recovery_enabled
fs.trash.interval
dfs.datanode.data.dir
dfs.namenode.accesstime.precision
delete.topic.enable
log.retention.bytes
spark.history.fs.cleaner.enabled
spark.history.fs.cleaner.interval
spark.history.fs.cleaner.maxAge
spark_daemon_memory
spark_thrift_cmd_opts
spark.broadcast.blockSize
spark.driver.maxResultSize
spark.dynamicAllocation.executorIdleTimeout
spark.dynamicAllocation.initialExecutors
spark.dynamicAllocation.maxExecutors
spark.dynamicAllocation.schedulerBacklogTimeout
spark.executor.memory
spark.files.maxPartitionBytes
spark.files.openCostInBytes
spark.kryoserializer.buffer.max
spark.memory.offHeap.enabled
spark.memory.offHeap.size
spark.sql.autoBroadcastJoinThreshold
spark.sql.shuffle.partitions
spark.storage.memoryMapThreshold
tez.runtime.io.sort.mb
tez.runtime.unordered.output.buffer.size-mb
tez.task.resource.memory.mb
initLimit
syncLimit
hive.auto.convert.join.noconditionaltask.size
hive.tez.container.size
mapreduce.map.java.opts
mapreduce.map.memory.mb
mapreduce.reduce.java.opts
mapreduce.reduce.memory.mb
mapreduce.task.io.sort.mb
yarn.app.mapreduce.am.command-opts
yarn.app.mapreduce.am.resource.mb
spark_thrift_cmd_opts
resourcemanager_heapsize
yarn.nodemanager.resource.cpu-vcores
yarn.nodemanager.resource.memory-mb
yarn.resourcemanager.am.max-attempts
yarn.scheduler.maximum-allocation-mb
yarn.scheduler.maximum-allocation-vcores
syncLimit

After we set all the parameters to their updated values (with the config.sh script; see the sketch at the end of this post) and restart all required services/components, some of the services/components cannot be started. What is wrong with my procedure? List of the config types for the parameters:

cluster-env
core-site
hdfs-site
hive-interactive-site
hive-site
kafka-broker
mapred-site
spark2-defaults
spark2-env
spark2-thrift-sparkconf
tez-interactive-site
tez-site
yarn-env
yarn-site
zoo.cfg

Remark: all the updated parameters are already set on another, new Ambari cluster.
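A minimal sketch of the update-and-restart flow, assuming the stock /var/lib/ambari-server/resources/scripts/configs.sh that ships with Ambari; AMBARI_HOST, CLUSTER_NAME, the admin:admin credentials, the two example keys and the YARN service are placeholders, not a confirmation of the procedure used here.

```bash
cd /var/lib/ambari-server/resources/scripts

# 1. Set each parameter against its own config type (one call per key):
./configs.sh -u admin -p admin set AMBARI_HOST CLUSTER_NAME yarn-site \
  yarn.nodemanager.resource.memory-mb 8192
./configs.sh -u admin -p admin set AMBARI_HOST CLUSTER_NAME mapred-site \
  mapreduce.map.memory.mb 2048

# 2. Confirm the values landed in the latest config version before restarting:
./configs.sh -u admin -p admin get AMBARI_HOST CLUSTER_NAME yarn-site

# 3. Restart one service at a time through the REST API (stop, then start),
#    checking its state before moving on to the next service:
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop YARN"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/YARN"
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start YARN"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/YARN"
```

Restarting per service rather than everything at once makes it easier to see which config type breaks a given component.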
Labels:
- Apache Ambari
- Apache Hadoop