Created 08-29-2017 04:54 AM
I need to update the following parameters in my old Ambari cluster to their latest values (each parameter has its own config type):
recovery_enabled, fs.trash.interval, dfs.datanode.data.dir, dfs.namenode.accesstime.precision,
delete.topic.enable, log.retention.bytes,
spark.history.fs.cleaner.enabled, spark.history.fs.cleaner.interval, spark.history.fs.cleaner.maxAge,
spark_daemon_memory, spark_thrift_cmd_opts, spark.broadcast.blockSize, spark.driver.maxResultSize,
spark.dynamicAllocation.executorIdleTimeout, spark.dynamicAllocation.initialExecutors,
spark.dynamicAllocation.maxExecutors, spark.dynamicAllocation.schedulerBacklogTimeout,
spark.executor.memory, spark.files.maxPartitionBytes, spark.files.openCostInBytes,
spark.kryoserializer.buffer.max, spark.memory.offHeap.enabled, spark.memory.offHeap.size,
spark.sql.autoBroadcastJoinThreshold, spark.sql.shuffle.partitions, spark.storage.memoryMapThreshold,
tez.runtime.io.sort.mb, tez.runtime.unordered.output.buffer.size-mb, tez.task.resource.memory.mb,
initLimit, syncLimit,
hive.auto.convert.join.noconditionaltask.size, hive.tez.container.size,
mapreduce.map.java.opts, mapreduce.map.memory.mb, mapreduce.reduce.java.opts,
mapreduce.reduce.memory.mb, mapreduce.task.io.sort.mb,
yarn.app.mapreduce.am.command-opts, yarn.app.mapreduce.am.resource.mb,
resourcemanager_heapsize, yarn.nodemanager.resource.cpu-vcores, yarn.nodemanager.resource.memory-mb,
yarn.resourcemanager.am.max-attempts, yarn.scheduler.maximum-allocation-mb,
yarn.scheduler.maximum-allocation-vcores
After we set all the parameters to their updated values (with the config.sh script) and restarted all required services/components, some of the services/components failed to start.
What is wrong with my procedure?
The config types for these parameters are:
cluster-env core-site hdfs-site hive-interactive-site hive-site kafka-broker mapred-site spark2-defaults spark2-env spark2-thrift-sparkconf tez-interactive-site tez-site yarn-env yarn-site zoo.cfg
Remark - all of the updated parameter values are already set on another, newer Ambari cluster.
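For context, this is roughly what a configs.sh-based update looks like. The host, cluster name, and example values below are placeholders, not taken from the question:

```shell
#!/bin/sh
# Placeholder connection details -- substitute your own.
AMBARI_HOST=ambari.example.com
CLUSTER=oldcluster

# configs.sh ships under /var/lib/ambari-server/resources/scripts/ on the
# Ambari server host. One "set" call per parameter, addressed to the
# config type that owns it, for example:
#
#   configs.sh set $AMBARI_HOST $CLUSTER core-site fs.trash.interval 360
#   configs.sh set $AMBARI_HOST $CLUSTER hdfs-site dfs.namenode.accesstime.precision 0
#   configs.sh set $AMBARI_HOST $CLUSTER kafka-broker delete.topic.enable true
#
# Each call creates a new config version, and the affected services are
# then flagged in the Ambari UI as needing a restart.
echo "would update cluster '$CLUSTER' via $AMBARI_HOST"
```

Each `set` is an independent API call, so the order of the calls does not matter; what matters is that every key lands in its correct config type.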
Created 08-29-2017 05:18 AM
After setting those parameters, you mentioned that some of the services/components fail to start.
Can you please check the logs of those services and let us know if you see any errors?
The parameters you list are mostly tuning settings, such as heap sizes and other memory-related values, which differ from environment to environment. So the other cluster's values might not be exactly right or applicable for this cluster's nodes.
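The memory parameters in the list are interrelated, and a common reason services fail after copying values between clusters is a JVM heap (the -Xmx in *.java.opts) that no longer fits inside its container size (*.memory.mb). A minimal sketch of such a sanity check; the helper names and the 80% headroom ratio are illustrative assumptions, not part of Ambari:

```python
import re

def heap_mb(java_opts):
    """Extract the -Xmx value from a java-opts string, in MB (None if absent)."""
    m = re.search(r"-Xmx(\d+)([gGmM])", java_opts)
    if not m:
        return None
    size, unit = int(m.group(1)), m.group(2).lower()
    return size * 1024 if unit == "g" else size

def heap_fits_container(java_opts, container_mb, ratio=0.8):
    """Heap should stay below ~80% of the container, leaving off-heap headroom."""
    heap = heap_mb(java_opts)
    return heap is not None and heap <= container_mb * ratio

# Example: a 1638 MB heap fits a 2048 MB container, but the same heap
# copied onto a cluster whose mapreduce.map.memory.mb is 1024 would make
# map tasks exceed their container and fail to start.
print(heap_fits_container("-Xmx1638m", 2048))  # True
print(heap_fits_container("-Xmx1638m", 1024))  # False
```

Running a check like this over pairs such as mapreduce.map.java.opts / mapreduce.map.memory.mb and yarn.app.mapreduce.am.command-opts / yarn.app.mapreduce.am.resource.mb can quickly reveal which copied values are invalid on the smaller cluster.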
Created 08-29-2017 05:27 AM
The old and the new clusters are identical except for these parameters (for sure), so I don't understand why applying the update JSON on the new cluster updates all parameters successfully, while updating the same parameters on the old cluster leaves services/components that won't start.
Created 08-29-2017 05:35 AM
Maybe we need to restart after each parameter update? Or apply the updates in order of config-type priority?
Created 08-29-2017 05:39 AM
Is it possible to tell, from the full parameter list, which values are causing the problems?
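One way to narrow this down: configs.sh also supports `get`, which dumps a config type's key/value properties, so you can diff each config type between the working new cluster and the failing old one. A minimal sketch, with hypothetical dicts standing in for two such dumps:

```python
def diff_configs(old, new):
    """Return {key: (old_value, new_value)} for keys whose values differ,
    or that exist in only one cluster (missing side reported as None)."""
    out = {}
    for key in sorted(set(old) | set(new)):
        if old.get(key) != new.get(key):
            out[key] = (old.get(key), new.get(key))
    return out

# Hypothetical yarn-site dumps from the two clusters:
old_yarn = {"yarn.nodemanager.resource.memory-mb": "5120",
            "yarn.scheduler.maximum-allocation-mb": "8192"}
new_yarn = {"yarn.nodemanager.resource.memory-mb": "5120",
            "yarn.scheduler.maximum-allocation-mb": "4096"}
print(diff_configs(old_yarn, new_yarn))
# {'yarn.scheduler.maximum-allocation-mb': ('8192', '4096')}
```

Cross-checking the diff output against the startup errors in the service logs is usually the fastest way to isolate the offending value.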
Created 08-29-2017 10:14 AM
A generic question: do we need to restart after each parameter is set, or can we restart once after all parameters are set?