Member since: 04-22-2016
Posts: 3
Kudos Received: 0
Solutions: 0
10-19-2016
07:49 PM
As of Ambari 2.2.2 I can run the Ambari server on either HTTP or HTTPS, but not both. Is it possible to have the Ambari server available on both HTTP and HTTPS at the same time?
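For reference, these are the ambari.properties entries that control the two modes (the port values shown are the defaults and may differ on your setup). `ambari-server setup-security` switches between the modes rather than enabling both, which is why a reverse proxy in front of Ambari is one common workaround for exposing both schemes:

  # HTTP mode (default): the API/UI listens on client.api.port
  client.api.port=8080

  # HTTPS mode: enabled via "ambari-server setup-security"
  api.ssl=true
  client.api.ssl.port=8443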
Labels: Apache Ambari
04-27-2016
01:06 AM
@Jonas Straub:

>> Regarding Option 1: I was under the impression that you can use xxxx-env blocks without defining the whole block within blueprints, e.g. I was using a blueprint recently to add the existing MySQL configuration to "hive-env". But I didn't check hive-env after the blueprint installation via the Ambari API, so I am not sure if the hive-env was messed up or not (I'll run another test in a week or so). Hive did start, though.

Did you ever get to test this? It does not seem to be working. I tried defining the "content" property for hadoop-env.sh with just HADOOP_NAMENODE_OPTS and HADOOP_DATANODE_OPTS, as follows. The resulting hadoop-env.sh contained only those two lines and nothing else.

  "hadoop-env": {
    "properties": {
      "content": "export HADOOP_NAMENODE_OPTS=\"-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -XX:PermSize={{namenode_opt_permsize}} -XX:MaxPermSize={{namenode_opt_maxpermsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_NAMENODE_OPTS}\"\nexport HADOOP_DATANODE_OPTS=\"-server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS}\"\n",
      "namenode_opt_maxnewsize": "361m",
      "namenode_opt_newsize": "361m",
      "namenode_heapsize": "2887m",
      "dtnode_heapsize": "2887m",
      "dfs_ha_enabled": "true",
      "hdfs_log_dir_prefix": "/data/var/log/hadoop",
      "mapred_log_dir_prefix": "/data/var/log/hadoop-mapreduce",
      "yarn_log_dir_prefix": "/data/var/log/hadoop-yarn",
      "hive_log_dir": "/data/var/log/hive",
      "zk_log_dir": "/data/var/log/zookeeper",
      "metrics_monitor_log_dir": "/data/var/log/ambari-metrics-monitor",
      "metrics_collector_log_dir": "/data/var/log/ambari-metrics-collector",
      "kafka_log_dir": "/data/var/log/kafka",
      "spark_log_dir": "/data/var/log/spark"
    }
  }
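To verify what actually landed in hadoop-env after a blueprint install, something like the following reads the effective config back through the Ambari REST API (a rough sketch; the host, credentials, and cluster name are placeholders to replace with your own):

  import requests

  # Placeholders - replace with your Ambari host, credentials, and cluster name
  AMBARI = "http://ambari-host:8080"
  AUTH = ("admin", "admin")
  CLUSTER = "mycluster"

  # Find the currently active config tag for hadoop-env
  desired = requests.get(
      f"{AMBARI}/api/v1/clusters/{CLUSTER}",
      params={"fields": "Clusters/desired_configs"},
      auth=AUTH,
  ).json()
  tag = desired["Clusters"]["desired_configs"]["hadoop-env"]["tag"]

  # Fetch that version of hadoop-env and print the "content" template
  cfg = requests.get(
      f"{AMBARI}/api/v1/clusters/{CLUSTER}/configurations",
      params={"type": "hadoop-env", "tag": tag},
      auth=AUTH,
  ).json()
  print(cfg["items"][0]["properties"]["content"])

If the printed template contains only the two export lines, the blueprint did replace the whole stock template rather than merging into it.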
04-22-2016
10:08 PM
In Ambari 2.2.1.0 this feature is not working on my setup. First I tried the following settings in my ambari.properties file:

  recovery.lifetime_max_count=4
  recovery.retry_interval=1
  recovery.max_count=5
  recovery.type=AUTO_START
  recovery.window_in_minutes=20

On rebooting one of the nodes, the node came back up with the Ambari agent running and connecting to the Ambari server, but none of the components started. I then added the following setting as well, and still no luck:

  recovery.enabled_components=METRICS_COLLECTOR,NAMENODE,DATANODE,ZKFC,JOURNALNODE,RESOURCEMANAGER,NODEMANAGER,APP_TIMELINE_SERVER,ZOOKEEPER_SERVER,HISTORYSERVER,SPARK_JOBHISTORYSERVER
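One way to confirm whether recovery kicked in after the reboot is to poll the component states on that host through the REST API (a quick sketch; the host, credentials, cluster, and node names are placeholders):

  import requests

  # Placeholders - replace with your Ambari host, credentials, cluster, and node
  AMBARI = "http://ambari-host:8080"
  AUTH = ("admin", "admin")
  CLUSTER = "mycluster"
  HOST = "node1.example.com"

  # List every component on the rebooted host with its current lifecycle state
  resp = requests.get(
      f"{AMBARI}/api/v1/clusters/{CLUSTER}/hosts/{HOST}/host_components",
      params={"fields": "HostRoles/component_name,HostRoles/state"},
      auth=AUTH,
  ).json()
  for item in resp["items"]:
      roles = item["HostRoles"]
      print(roles["component_name"], roles["state"])

Components stuck in INSTALLED (rather than STARTED) after the recovery window would indicate auto-start never triggered for them.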