Created 04-29-2019 06:18 PM
I'm new to the Hortonworks Hadoop stack: I inherited a cluster from a colleague who quit, there is no documentation on how it was set up, and I have only a basic understanding of the platform. I recently had to reboot some of our host machines.
The problem is that these machines aren't sending heartbeats anymore, even though the agent log at /var/log/ambari-agent/ambari-agent.log shows the heartbeats being sent and acknowledged:
INFO 2019-04-29 18:03:09,862 __init__.py:57 - Event from server at /user/ (correlation_id=371): {u'status': u'OK'}
INFO 2019-04-29 18:03:19,560 security.py:135 - Event to server at /heartbeat (correlation_id=372): {'id': 278}
INFO 2019-04-29 18:03:19,563 __init__.py:57 - Event from server at /user/ (correlation_id=372): {u'status': u'OK', u'id': 279}
INFO 2019-04-29 18:03:29,564 security.py:135 - Event to server at /heartbeat (correlation_id=373): {'id': 279}
INFO 2019-04-29 18:03:29,567 __init__.py:57 - Event from server at /user/ (correlation_id=373): {u'status': u'OK', u'id': 280}
INFO 2019-04-29 18:03:39,569 security.py:135 - Event to server at /heartbeat (correlation_id=374): {'id': 280}
INFO 2019-04-29 18:03:39,572 __init__.py:57 - Event from server at /user/ (correlation_id=374): {u'status': u'OK', u'id': 281}
I have tried everything from restarting ambari-server and ambari-agent to manually starting the NodeManager with:
/usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh start nodemanager
Nothing helps. It's also strange that the YARN dashboard shows "0/8 started nodemanagers" under components, while the same page reports "nodemanager status 8 active". Can someone point me in the right direction? Thanks in advance.
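If it helps, I can also cross-check the dashboard numbers against the ResourceManager's own view of the NodeManagers (just a sketch; it assumes the yarn CLI is on the PATH of a cluster node):

# yarn node -list -all     (lists every NodeManager with its state, e.g. RUNNING or LOST)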
Created 04-29-2019 11:15 PM
Can you share the following data so that we can find out what might be wrong?
1. Restart ambari-agent on a problematic host and then collect the full agent log for the 3-5 minutes after the restart.
2. Also please share the output of the following commands:
# hostname -f
# cat /etc/hosts
# cat /etc/ambari-agent/conf/ambari-agent.ini | grep 'hostname = '
3. From the problematic agent host, can you make the following call to the Ambari FQDN to check that you can reach the correct Ambari server on port 8440? Also check whether the Ambari server or agent FQDN has changed.
# telnet $AMBARI_FQDN 8440
(OR)
# nc -v $AMBARI_FQDN 8440
4. When you restart ambari-agent, please also collect a few minutes of the ambari-server log covering the same time window, so that we can see whether the agent is sending its registration request and heartbeats to the Ambari server correctly. A collection sketch follows below.
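For example, something along these lines (only a sketch; the 3-minute wait and the /tmp output file names are arbitrary, and the server log path assumes the default /var/log/ambari-server location):

On the problematic agent host:
# ambari-agent restart
# sleep 180
# tail -n 500 /var/log/ambari-agent/ambari-agent.log > /tmp/ambari-agent-after-restart.log

On the Ambari server host, for the same time window:
# tail -n 500 /var/log/ambari-server/ambari-server.log > /tmp/ambari-server-during-agent-restart.log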
Created 05-01-2019 03:18 PM
hostname -f gives just the short name of the host (i.e. not the FQDN).
cat /etc/hosts shows:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
The hostname entry in ambari-agent.ini is the proper FQDN of the Ambari host.
nc -v $AMBARI_FQDN 8440 shows that I can connect to Ambari.
Here is the output of ambari-server.log after I restarted the slave. I redacted some names:
2019-04-30 09:48:28,210 WARN [qtp-ambari-agent-54683] SecurityFilter:103 - Request https://{ambari-host}:8440/ca doesn't match any pattern.
2019-04-30 09:48:28,211 WARN [qtp-ambari-agent-54683] SecurityFilter:62 - This request is not allowed on this port: https://{ambari-host}:8440/ca
2019-04-30 09:48:30,417 INFO [agent-register-processor-6] HeartBeatHandler:317 - agentOsType = centos7
2019-04-30 09:48:30,420 INFO [agent-register-processor-6] HostImpl:346 - Received host registration, host=[hostname={name-of-slave},fqdn={name-of-slave},domain=,architecture=x86_64,processorcount=8,physicalprocessorcount=8,osname=centos,osversion=7.5.1804,osfamily=redhat,memory=98836260,uptime_hours=282,mounts=(available=649016328,mountpoint=/,used=314595320,percent=33%,size=963611648,device=/dev/mapper/centos-root,type=xfs)] , registrationTime=1556610510417, agentVersion=2.7.0.0
2019-04-30 09:48:30,420 INFO [agent-register-processor-6] TopologyManager:643 - TopologyManager.onHostRegistered: Entering
2019-04-30 09:48:30,420 INFO [agent-register-processor-6] TopologyManager:697 - Host {name-of-slave} re-registered, will not be added to the available hosts list
2019-04-30 09:49:07,302 INFO [MessageBroker-1] WebSocketMessageBrokerStats:113 - WebSocketSession[1 current WS(1)-HttpStream(0)-HttpPoll(0), 5 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(5)-CONNECTED(5)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 16, active threads = 0, queued tasks = 0, completed tasks = 19257], outboundChannelpool size = 6, active threads = 0, queued tasks = 0, completed tasks = 6702], sockJsScheduler[pool size = 8, active threads = 1, queued tasks = 0, completed tasks = 37]
2019-04-30 09:49:07,756 INFO [MessageBroker-1] WebSocketMessageBrokerStats:113 - WebSocketSession[9 current WS(9)-HttpStream(0)-HttpPoll(0), 17 total, 0 closed abnormally (0 connect failure, 0 send limit, 8 transport error)], stompSubProtocol[processed CONNECT(0)-CONNECTED(17)-DISCONNECT(8)], stompBrokerRelay[null], inboundChannel[pool size = 10, active threads = 0, queued tasks = 0, completed tasks = 206349], outboundChannelpool size = 10, active threads = 0, queued tasks = 0, completed tasks = 68647], sockJsScheduler[pool size = 8, active threads = 1, queued tasks = 0, completed tasks = 37]
Is it normal that there is no proper FQDN of the slave in the logs?
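For reference, here is how I can compare what name the agent reports with what Ambari has registered (only a sketch: the python one-liner mirrors the socket.getfqdn() call visible in the agent log below, the /api/v1/hosts call is the standard Ambari REST endpoint for listing registered hosts, and admin:admin plus the URL are placeholders for however you reach the Ambari web UI):

# hostname -s
# hostname -f
# python -c 'import socket; print(socket.getfqdn())'     (what the agent's socket.getfqdn() returns)
# curl -k -u admin:admin 'https://{ambari-host}:8080/api/v1/hosts'     (host names Ambari has registered)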
Here is the output of my ambari-agent.log after restart:
INFO 2019-04-30 09:48:23,847 main.py:155 - loglevel=logging.INFO INFO 2019-04-30 09:48:23,850 ClusterCache.py:125 - Rewriting cache ClusterMetadataCache for cluster 2 INFO 2019-04-30 09:48:23,850 ClusterCache.py:125 - Rewriting cache ClusterMetadataCache for cluster -1 INFO 2019-04-30 09:48:23,856 ClusterCache.py:125 - Rewriting cache ClusterTopologyCache for cluster 2 INFO 2019-04-30 09:48:23,900 ClusterCache.py:125 - Rewriting cache ClusterConfigurationCache for cluster 2 INFO 2019-04-30 09:48:23,935 Hardware.py:68 - Initializing host system information. INFO 2019-04-30 09:48:23,943 Hardware.py:188 - Some mount points were ignored: /dev, /dev/shm, /run, /sys/fs/cgroup, /boot INFO 2019-04-30 09:48:23,961 Facter.py:202 - Directory: '/etc/resource_overrides' does not exist - it won't be used for gathering system resources. INFO 2019-04-30 09:48:23,966 Hardware.py:73 - Host system information: {'kernel': 'Linux', 'domain': '', 'physicalprocessorcount': 8, 'kernelrelease': '3.10.0-862.9.1.el7.x86_64', 'uptime_days': '11', 'memorytotal': 98836260, 'swapfree': '4.00 GB', 'memorysize': 98836260, 'osfamily': 'redhat', 'swapsize': '4.00 GB', 'processorcount': 8, 'netmask': '255.255.254.0', 'timezone': 'CET', 'hardwareisa': 'x86_64', 'memoryfree': 97365576, 'operatingsystem': 'centos', 'kernelmajversion': '3.10', 'kernelversion': '3.10.0', 'macaddress': '02:A8:0C:F1:01:EC', 'operatingsystemrelease': '7.5.1804', 'ipaddress': '{ip-addr}', 'hostname': '{clustername}hdpslave02', 'uptime_hours': '282', 'fqdn': '{clustername}hdpslave02', 'id': 'root', 'architecture': 'x86_64', 'selinux': True, 'mounts': [{'available': '649015888', 'used': '314595760', 'percent': '33%', 'device': '/dev/mapper/centos-root', 'mountpoint': '/', 'type': 'xfs', 'size': '963611648'}], 'hardwaremodel': 'x86_64', 'uptime_seconds': '1017347', 'interfaces': 'lo,eth0'} WARNING 2019-04-30 09:48:23,967 shell.py:822 - can not switch user for RUN_COMMAND. INFO 2019-04-30 09:48:23,972 HeartbeatHandlers.py:82 - Ambari-agent received 15 signal, stopping... WARNING 2019-04-30 09:48:23,973 shell.py:822 - can not switch user for RUN_COMMAND. INFO 2019-04-30 09:48:23,974 ActionQueue.py:182 - ActionQueue thread has successfully finished INFO 2019-04-30 09:48:23,985 HostStatusReporter.py:62 - HostStatusReporter has successfully finished INFO 2019-04-30 09:48:23,996 CommandStatusReporter.py:51 - CommandStatusReporter has successfully finished INFO 2019-04-30 09:48:24,003 AlertStatusReporter.py:75 - AlertStatusReporter has successfully finished INFO 2019-04-30 09:48:24,004 ComponentStatusExecutor.py:114 - ComponentStatusExecutor has successfully finished INFO 2019-04-30 09:48:24,008 transport.py:358 - Receiver loop ended INFO 2019-04-30 09:48:24,009 HeartbeatThread.py:113 - HeartbeatThread has successfully finished INFO 2019-04-30 09:48:24,009 ExitHelper.py:57 - Performing cleanup before exiting... INFO 2019-04-30 09:48:24,011 AlertSchedulerHandler.py:159 - [AlertScheduler] Stopped the alert scheduler. INFO 2019-04-30 09:48:24,011 AlertSchedulerHandler.py:159 - [AlertScheduler] Stopped the alert scheduler. INFO 2019-04-30 09:48:24,011 ExitHelper.py:71 - Cleanup finished, exiting with code:0 WARNING 2019-04-30 09:48:24,079 shell.py:822 - can not switch user for RUN_COMMAND. INFO 2019-04-30 09:48:24,086 main.py:308 - Agent died gracefully, exiting. INFO 2019-04-30 09:48:24,087 ExitHelper.py:57 - Performing cleanup before exiting... INFO 2019-04-30 09:48:24,087 AlertSchedulerHandler.py:159 - [AlertScheduler] Stopped the alert scheduler. 
INFO 2019-04-30 09:48:24,088 AlertSchedulerHandler.py:159 - [AlertScheduler] Stopped the alert scheduler. INFO 2019-04-30 09:48:24,536 main.py:155 - loglevel=logging.INFO INFO 2019-04-30 09:48:24,539 ClusterCache.py:125 - Rewriting cache ClusterMetadataCache for cluster 2 INFO 2019-04-30 09:48:24,539 ClusterCache.py:125 - Rewriting cache ClusterMetadataCache for cluster -1 INFO 2019-04-30 09:48:24,546 ClusterCache.py:125 - Rewriting cache ClusterTopologyCache for cluster 2 INFO 2019-04-30 09:48:24,588 ClusterCache.py:125 - Rewriting cache ClusterConfigurationCache for cluster 2 INFO 2019-04-30 09:48:24,626 Hardware.py:68 - Initializing host system information. INFO 2019-04-30 09:48:24,633 Hardware.py:188 - Some mount points were ignored: /dev, /dev/shm, /run, /sys/fs/cgroup, /boot INFO 2019-04-30 09:48:24,651 Facter.py:202 - Directory: '/etc/resource_overrides' does not exist - it won't be used for gathering system resources. INFO 2019-04-30 09:48:24,655 Hardware.py:73 - Host system information: {'kernel': 'Linux', 'domain': '', 'physicalprocessorcount': 8, 'kernelrelease': '3.10.0-862.9.1.el7.x86_64', 'uptime_days': '11', 'memorytotal': 98836260, 'swapfree': '4.00 GB', 'memorysize': 98836260, 'osfamily': 'redhat', 'swapsize': '4.00 GB', 'processorcount': 8, 'netmask': '255.255.254.0', 'timezone': 'CET', 'hardwareisa': 'x86_64', 'memoryfree': 97388304, 'operatingsystem': 'centos', 'kernelmajversion': '3.10', 'kernelversion': '3.10.0', 'macaddress': '02:A8:0C:F1:01:EC', 'operatingsystemrelease': '7.5.1804', 'ipaddress': '{ip-addr}', 'hostname': '{clustername}hdpslave02', 'uptime_hours': '282', 'fqdn': '{clustername}hdpslave02', 'id': 'root', 'architecture': 'x86_64', 'selinux': True, 'mounts': [{'available': '649016328', 'used': '314595320', 'percent': '33%', 'device': '/dev/mapper/centos-root', 'mountpoint': '/', 'type': 'xfs', 'size': '963611648'}], 'hardwaremodel': 'x86_64', 'uptime_seconds': '1017348', 'interfaces': 'lo,eth0'} INFO 2019-04-30 09:48:24,658 DataCleaner.py:39 - Data cleanup thread started INFO 2019-04-30 09:48:24,659 DataCleaner.py:120 - Data cleanup started INFO 2019-04-30 09:48:24,661 DataCleaner.py:122 - Data cleanup finished INFO 2019-04-30 09:48:24,690 PingPortListener.py:50 - Ping port listener started on port: 8670 INFO 2019-04-30 09:48:24,693 main.py:481 - Connecting to Ambari server at https://{ambari-host}:8440 ({ip-addr}2) INFO 2019-04-30 09:48:24,693 NetUtil.py:61 - Connecting to https://{ambari-host}:8440/ca INFO 2019-04-30 09:48:24,749 main.py:491 - Connected to Ambari server {ambari-host} INFO 2019-04-30 09:48:24,749 AlertSchedulerHandler.py:149 - [AlertScheduler] Starting <ambari_agent.apscheduler.scheduler.Scheduler object at 0x7fd350979450>; currently running: False INFO 2019-04-30 09:48:24,751 NetUtil.py:61 - Connecting to https://{ambari-host}:8440/connection_info INFO 2019-04-30 09:48:24,805 security.py:61 - Connecting to wss://{ambari-host}:8441/agent/stomp/v1 INFO 2019-04-30 09:48:24,862 transport.py:329 - Starting receiver loop INFO 2019-04-30 09:48:24,863 security.py:67 - SSL connection established. Two-way SSL authentication is turned off on the server. 
INFO 2019-04-30 09:48:26,949 hostname.py:106 - Read public hostname '{clustername}hdpslave02' using socket.getfqdn() INFO 2019-04-30 09:48:26,949 HeartbeatThread.py:125 - Sending registration request INFO 2019-04-30 09:48:26,950 security.py:135 - Event to server at /register (correlation_id=0): {'currentPingPort': 8670, 'timestamp': 1556610504864, 'hostname': '{clustername}hdpslave02', 'publicHostname': '{clustername}hdpslave02', 'hardwareProfile': {'kernel': 'Linux', 'domain': '', 'kernelrelease': '3.10.0-862.9.1.el7.x86_64', 'uptime_days': '11', 'memorytotal': 98836260, 'swapfree': '4.00 GB', 'processorcount': 8, 'selinux': True, 'timezone': 'CET', 'hardwareisa': 'x86_64', 'operatingsystem': 'centos', 'hostname': '{clustername}hdpslave02', 'id': 'root', 'memoryfree': 97388304, 'hardwaremodel': 'x86_64', 'uptime_seconds': '1017348', 'osfamily': 'redhat', 'physicalprocessorcount': 8, 'interfaces': 'lo,eth0', 'memorysize': 98836260, 'swapsize': '4.00 GB', 'netmask': '255.255.254.0', 'ipaddress': '{ip-addr}', 'kernelmajversion': '3.10', 'kernelversion': '3.10.0', 'macaddress': '02:A8:0C:F1:01:EC', 'operatingsystemrelease': '7.5.1804', 'uptime_hours': '282', 'fqdn': '{clustername}hdpslave02', 'architecture': 'x86_64', 'mounts': [{'available': '649016328', 'used': '314595320', 'percent': '33%', 'device': '/dev/mapper/centos-root', 'mountpoint': '/', 'type': 'xfs', 'size': '963611648'}]}, 'agentEnv': {'transparentHugePage': '', 'hostHealth': {'agentTimeStampAtReporting': 1556610504943, 'liveServices': [{'status': 'Unhealthy', 'name': 'chronyd', 'desc': '\xe2\x97\x8f ntpd.service - Network Time Service\n Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)\n Active: inactive (dead)\n\xe2\x97\x8f chronyd.service - NTP client/server\n Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)\n Active: inactive (dead)\n Docs: man:chronyd(8)\n man:chrony.conf(5)'}]}, 'reverseLookup': True, 'umask': '18', 'hasUnlimitedJcePolicy': None, 'alternatives': [], 'firewallName': 'iptables', 'stackFoldersAndFiles': [{'type': 'directory', 'name': '/etc/hadoop'}, {'type': 'directory', 'name': '/etc/hbase'}, {'type': 'directory', 'name': '/etc/zookeeper'}, {'type': 'directory', 'name': '/etc/ambari-metrics-monitor'}, {'type': 'directory', 'name': '/var/run/hadoop-yarn'}, {'type': 'directory', 'name': '/var/log/hadoop'}, {'type': 'directory', 'name': '/var/log/hbase'}, {'type': 'directory', 'name': '/var/log/zookeeper'}, {'type': 'directory', 'name': '/var/log/hadoop-yarn'}, {'type': 'directory', 'name': '/var/log/hadoop-mapreduce'}, {'type': 'directory', 'name': '/var/log/ambari-metrics-monitor'}, {'type': 'directory', 'name': '/usr/lib/flume'}, {'type': 'directory', 'name': '/usr/lib/storm'}, {'type': 'directory', 'name': '/var/lib/hadoop-hdfs'}, {'type': 'directory', 'name': '/var/lib/hadoop-yarn'}, {'type': 'directory', 'name': '/var/lib/hadoop-mapreduce'}, {'type': 'directory', 'name': '/var/lib/ambari-metrics-monitor'}, {'type': 'directory', 'name': '/tmp/hadoop-hdfs'}, {'type': 'directory', 'name': '/hadoop/hdfs'}, {'type': 'directory', 'name': '/hadoop/yarn'}], 'existingUsers': [{'status': 'Available', 'name': 'hive', 'homeDir': '/home/hive'}, {'status': 'Available', 'name': 'ambari-qa', 'homeDir': '/home/ambari-qa'}, {'status': 'Available', 'name': 'hbase', 'homeDir': '/var/run/hbase'}, {'status': 'Available', 'name': 'mapred', 'homeDir': '/home/mapred'}, {'status': 'Available', 'name': 'hdfs', 'homeDir': '/home/hdfs'}, {'status': 
'Available', 'name': 'zookeeper', 'homeDir': '/home/zookeeper'}, {'status': 'Available', 'name': 'yarn', 'homeDir': '/home/yarn'}, {'status': 'Available', 'name': 'tez', 'homeDir': '/home/tez'}, {'status': 'Available', 'name': 'knox', 'homeDir': '/home/knox'}, {'status': 'Available', 'name': 'ams', 'homeDir': '/home/ams'}, {'status': 'Available', 'name': 'spark', 'homeDir': '/home/spark'}, {'status': 'Available', 'name': 'zeppelin', 'homeDir': '/home/zeppelin'}], 'firewallRunning': False}, 'prefix': '/var/lib/ambari-agent/data', 'agentVersion': '2.7.0.0', 'agentStartTime': 1556610504656, 'id': -1} INFO 2019-04-30 09:48:26,961 __init__.py:57 - Event from server at /user/ (correlation_id=0): {u'status': u'OK', u'exitstatus': 0, u'id': 0} INFO 2019-04-30 09:48:26,968 HeartbeatThread.py:130 - Registration response received INFO 2019-04-30 09:48:26,968 security.py:135 - Event to server at /agents/topologies (correlation_id=1): {'hash': '76930484f50ac1fd52349852d0bca163931160e27b505eec934c15b2b8f2e02b7f5d89e063679da104cd0a55a1f7d14462fc1ea472e04d819b9e50979983352e'} INFO 2019-04-30 09:48:26,978 __init__.py:57 - Event from server at /user/ (correlation_id=1): {u'eventType': u'CREATE', u'hash': u'6395f6cab4a62096a0dce7d5088314509492b19cbcdca926759070410e671260fd2b1cfb5cb3de10d80249eefe03fc0d2e9c3945cb57a009f1ca6769848df35c', u'clusters': {u'2': {u'hosts': [{u'rackName': u'/default-rack', u'hostName': u'{slave-host}', u'ipv4': u'{ip-addr}', u'hostId': 1}, {u'rackName': u'/default-rack', u'hostName': u'{edge-host}', u'ipv4': u'{ip-addr}1', u'hostId': 2}, {u'rackName': u'/default-rack', u'hostName': u'{slave-host}', u'ipv4': u'{ip-addr}0', u'hostId': 3}, {u'rackName': u'/default-rack', u'hostName': u'{slave-host}', u'ipv4': u'{ip-addr}', u'hostId': 4}, {u'rackName': u'/default-rack', u'hostName': u'{slave-host}', u'ipv4': u'{ip-addr}', u'hostId': 5}, {u'rackName': u'/default-rack', u'hostName': u'{host}', u'ipv4': u'{ip-addr}', u'hostId': 6}, {u'rackName': u'/default-rack', u'hostName': u'{host}', u'ipv4': u'{ip-addr}', u'hostId': 7}, {u'rackName': u'/default-rack', u'hostName': u'{slave-host}', u'ipv4': u'{ip-addr}', u'hostId': 8}, {u'rackName': u'/default-rack', u'hostName': u'{host}', u'ipv4': u'{ip-addr}', u'hostId': 9}], u'components': [{u'commandParams': {u'command_timeout': u'600', u'script': u'scripts/hst_server.py', u'service_package_folder': u'stacks/HDP/3.0/services/SMARTSENSE/package', u'script_type': u'PYTHON'}, u'componentName': u'HST_SERVER', u'serviceName': u'SMARTSENSE', u'componentLevelParams': {u'unlimited_key_jce_required': u'true', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [7]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/timelinereader.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/YARN/package', u'script_type': u'PYTHON'}, u'componentName': u'TIMELINE_READER', u'serviceName': u'YARN', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [9]}, {u'commandParams': {u'command_timeout': u'1800', u'script': u'scripts/namenode.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/HDFS/package', u'script_type': u'PYTHON'}, u'componentName': u'NAMENODE', u'serviceName': u'HDFS', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [9]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/yarn_registry_dns.py', u'version': u'3.0.0.0-1634', 
u'service_package_folder': u'stacks/HDP/3.0/services/YARN/package', u'script_type': u'PYTHON'}, u'componentName': u'YARN_REGISTRY_DNS', u'serviceName': u'YARN', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [9]}, {u'commandParams': {u'command_timeout': u'600', u'script': u'scripts/activity_analyzer.py', u'service_package_folder': u'stacks/HDP/3.0/services/SMARTSENSE/package', u'script_type': u'PYTHON'}, u'componentName': u'ACTIVITY_ANALYZER', u'serviceName': u'SMARTSENSE', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [7]}, {u'commandParams': {u'command_timeout': u'900', u'script': u'scripts/mysql_server.py', u'service_package_folder': u'stacks/HDP/3.0/services/HIVE/package', u'script_type': u'PYTHON'}, u'componentName': u'MYSQL_SERVER', u'serviceName': u'HIVE', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'[]'}, u'hostIds': [6]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/metrics_monitor.py', u'service_package_folder': u'stacks/HDP/3.0/services/AMBARI_METRICS/package', u'script_type': u'PYTHON'}, u'componentName': u'METRICS_MONITOR', u'serviceName': u'AMBARI_METRICS', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [1, 2, 3, 4, 5, 6, 7, 8, 9]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/nodemanager.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/YARN/package', u'script_type': u'PYTHON'}, u'componentName': u'NODEMANAGER', u'serviceName': u'YARN', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [1, 3, 4, 5, 8]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/application_timeline_server.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/YARN/package', u'script_type': u'PYTHON'}, u'componentName': u'APP_TIMELINE_SERVER', u'serviceName': u'YARN', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [9]}, {u'commandParams': {u'command_timeout': u'600', u'script': u'scripts/activity_explorer.py', u'service_package_folder': u'stacks/HDP/3.0/services/SMARTSENSE/package', u'script_type': u'PYTHON'}, u'componentName': u'ACTIVITY_EXPLORER', u'serviceName': u'SMARTSENSE', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [7]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/zookeeper_server.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/ZOOKEEPER/package', u'script_type': u'PYTHON'}, u'componentName': u'ZOOKEEPER_SERVER', u'serviceName': u'ZOOKEEPER', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [6, 7, 9]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/yarn_client.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/YARN/package', u'script_type': u'PYTHON'}, u'componentName': u'YARN_CLIENT', u'serviceName': u'YARN', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [2, 6, 7]}, {u'commandParams': {u'command_timeout': u'600', u'script': 
u'scripts/job_history_server.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/SPARK2/package', u'script_type': u'PYTHON'}, u'componentName': u'SPARK2_JOBHISTORYSERVER', u'serviceName': u'SPARK2', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [7]}, {u'commandParams': {u'command_timeout': u'900', u'script': u'scripts/zookeeper_client.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/ZOOKEEPER/package', u'script_type': u'PYTHON'}, u'componentName': u'ZOOKEEPER_CLIENT', u'serviceName': u'ZOOKEEPER', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [2, 6]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/snamenode.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/HDFS/package', u'script_type': u'PYTHON'}, u'componentName': u'SECONDARY_NAMENODE', u'serviceName': u'HDFS', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [6]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/hive_metastore.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/HIVE/package', u'script_type': u'PYTHON'}, u'componentName': u'HIVE_METASTORE', u'serviceName': u'HIVE', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'[]'}, u'hostIds': [6]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/resourcemanager.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/YARN/package', u'script_type': u'PYTHON'}, u'componentName': u'RESOURCEMANAGER', u'serviceName': u'YARN', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [9]}, {u'commandParams': {u'command_timeout': u'900', u'script': u'scripts/hive_client.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/HIVE/package', u'script_type': u'PYTHON'}, u'componentName': u'HIVE_CLIENT', u'serviceName': u'HIVE', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [2, 7]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/mapreduce2_client.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/YARN/package', u'script_type': u'PYTHON'}, u'componentName': u'MAPREDUCE2_CLIENT', u'serviceName': u'MAPREDUCE2', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [2, 6, 7]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/tez_client.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/TEZ/package', u'script_type': u'PYTHON'}, u'componentName': u'TEZ_CLIENT', u'serviceName': u'TEZ', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [2, 6, 7, 9]}, {u'commandParams': {u'command_timeout': u'600', u'script': u'scripts/spark_client.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/SPARK2/package', u'script_type': u'PYTHON'}, u'componentName': u'SPARK2_CLIENT', u'serviceName': u'SPARK2', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': 
u'["*"]'}, u'hostIds': [2, 7]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/knox_gateway.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/KNOX/package', u'script_type': u'PYTHON'}, u'componentName': u'KNOX_GATEWAY', u'serviceName': u'KNOX', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [2]}, {u'commandParams': {u'command_timeout': u'600', u'script': u'scripts/livy2_server.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/SPARK2/package', u'script_type': u'PYTHON'}, u'componentName': u'LIVY2_SERVER', u'serviceName': u'SPARK2', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [7]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/hdfs_client.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/HDFS/package', u'script_type': u'PYTHON'}, u'componentName': u'HDFS_CLIENT', u'serviceName': u'HDFS', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [2, 7, 9]}, {u'commandParams': {u'command_timeout': u'600', u'script': u'scripts/hst_agent.py', u'service_package_folder': u'stacks/HDP/3.0/services/SMARTSENSE/package', u'script_type': u'PYTHON'}, u'componentName': u'HST_AGENT', u'serviceName': u'SMARTSENSE', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [1, 2, 3, 4, 5, 6, 7, 8, 9]}, {u'commandParams': {u'command_timeout': u'900', u'script': u'scripts/hive_server.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/HIVE/package', u'script_type': u'PYTHON'}, u'componentName': u'HIVE_SERVER', u'serviceName': u'HIVE', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'[]'}, u'hostIds': [6]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/metrics_collector.py', u'service_package_folder': u'stacks/HDP/3.0/services/AMBARI_METRICS/package', u'script_type': u'PYTHON'}, u'componentName': u'METRICS_COLLECTOR', u'serviceName': u'AMBARI_METRICS', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [7]}, {u'commandParams': {u'command_timeout': u'10000', u'script': u'scripts/master.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/ZEPPELIN/package', u'script_type': u'PYTHON'}, u'componentName': u'ZEPPELIN_MASTER', u'serviceName': u'ZEPPELIN', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [7]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/metrics_grafana.py', u'service_package_folder': u'stacks/HDP/3.0/services/AMBARI_METRICS/package', u'script_type': u'PYTHON'}, u'componentName': u'METRICS_GRAFANA', u'serviceName': u'AMBARI_METRICS', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [7]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/nfsgateway.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/HDFS/package', u'script_type': u'PYTHON'}, u'componentName': u'NFS_GATEWAY', u'serviceName': u'HDFS', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', 
u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [2]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/datanode.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/HDFS/package', u'script_type': u'PYTHON'}, u'componentName': u'DATANODE', u'serviceName': u'HDFS', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [1, 3, 4, 5, 8]}, {u'commandParams': {u'command_timeout': u'1200', u'script': u'scripts/historyserver.py', u'version': u'3.0.0.0-1634', u'service_package_folder': u'stacks/HDP/3.0/services/YARN/package', u'script_type': u'PYTHON'}, u'componentName': u'HISTORYSERVER', u'serviceName': u'MAPREDUCE2', u'componentLevelParams': {u'unlimited_key_jce_required': u'false', u'clientsToUpdateConfigs': u'["*"]'}, u'hostIds': [9]}]}}} INFO 2019-04-30 09:48:26,984 ClusterCache.py:125 - Rewriting cache ClusterTopologyCache for cluster 2 INFO 2019-04-30 09:48:26,990 security.py:135 - Event to server at /agents/metadata (correlation_id=2): {'hash': '6aad52566e42bf25748ff667441c6ca9b4d563605de27683821e872de37b53dd1e872979c1b245b67f4dbfd14e79984ca8c952780ed53fa5a1f0701415a8c239'} INFO 2019-04-30 09:48:26,995 __init__.py:57 - Event from server at /user/ (correlation_id=2): {u'eventType': u'CREATE', u'hash': u'ff48b7097e339bc815f6b1db05164d098dd7acef7e7188be94d929922a0c37a3af2ee76ba549dda30233f44920f35caa03e7fb5401609dde15d95db20ab075fc', u'clusters': {u'2': {u'clusterLevelParams': {u'cluster_name': u'{clustername}', u'not_managed_hdfs_path_list': u'["/mr-history/done","/warehouse/tablespace/managed/hive","/warehouse/tablespace/external/hive","/app-logs","/tmp"]', u'hooks_folder': u'stack-hooks', u'dfs_type': u'HDFS', u'group_list': u'["livy","spark","hdfs","zeppelin","hadoop","users","knox"]', u'user_groups': u'{"yarn-ats":["hadoop"],"hive":["hadoop"],"zookeeper":["hadoop"],"ams":["hadoop"],"tez":["hadoop","users"],"zeppelin":["zeppelin","hadoop"],"livy":["livy","hadoop"],"spark":["spark","hadoop"],"ambari-qa":["hadoop","users"],"hdfs":["hdfs","hadoop"],"yarn":["hadoop"],"mapred":["hadoop"],"knox":["hadoop","knox"]}', u'stack_version': u'3.0', u'stack_name': u'HDP', u'user_list': u'["hive","yarn-ats","zookeeper","ams","tez","zeppelin","livy","spark","ambari-qa","hdfs","yarn","mapred","knox"]'}, u'serviceLevelParams': {u'HDFS': {u'configuration_credentials': {}, u'credentialStoreEnabled': False, u'status_commands_timeout': 300, u'version': u'3.1.0', u'service_package_folder': u'stacks/HDP/3.0/services/HDFS/package'}, u'SPARK2': {u'configuration_credentials': {}, u'credentialStoreEnabled': False, u'status_commands_timeout': 300, u'version': u'2.3.0', u'service_package_folder': u'stacks/HDP/3.0/services/SPARK2/package'}, u'AMBARI_METRICS': {u'configuration_credentials': {}, u'credentialStoreEnabled': False, u'status_commands_timeout': 600, u'version': u'0.1.0', u'service_package_folder': u'stacks/HDP/3.0/services/AMBARI_METRICS/package'}, u'ZOOKEEPER': {u'configuration_credentials': {}, u'credentialStoreEnabled': False, u'status_commands_timeout': 300, u'version': u'3.4.9.3.0', u'service_package_folder': u'stacks/HDP/3.0/services/ZOOKEEPER/package'}, u'HIVE': {u'configuration_credentials': {u'hive-site': {u'javax.jdo.option.ConnectionPassword': u'javax.jdo.option.ConnectionPassword'}}, u'credentialStoreEnabled': True, u'status_commands_timeout': 300, u'version': u'3.0.0.3.0', u'service_package_folder': u'stacks/HDP/3.0/services/HIVE/package'}, u'TEZ': {u'configuration_credentials': 
{}, u'credentialStoreEnabled': False, u'status_commands_timeout': 300, u'version': u'0.9.0.3.0', u'service_package_folder': u'stacks/HDP/3.0/services/TEZ/package'}, u'MAPREDUCE2': {u'configuration_credentials': {}, u'credentialStoreEnabled': False, u'status_commands_timeout': 300, u'version': u'3.0.0.3.0', u'service_package_folder': u'stacks/HDP/3.0/services/YARN/package'}, u'YARN': {u'configuration_credentials': {}, u'credentialStoreEnabled': False, u'status_commands_timeout': 300, u'version': u'3.1.0', u'service_package_folder': u'stacks/HDP/3.0/services/YARN/package'}, u'KNOX': {u'configuration_credentials': {}, u'credentialStoreEnabled': False, u'status_commands_timeout': 300, u'version': u'0.5.0.3.0', u'service_package_folder': u'stacks/HDP/3.0/services/KNOX/package'}, u'SMARTSENSE': {u'configuration_credentials': {}, u'credentialStoreEnabled': False, u'status_commands_timeout': 300, u'version': u'1.5.0.2.7.0.0-897', u'service_package_folder': u'stacks/HDP/3.0/services/SMARTSENSE/package'}, u'ZEPPELIN': {u'configuration_credentials': {}, u'credentialStoreEnabled': False, u'status_commands_timeout': 300, u'version': u'0.8.0', u'service_package_folder': u'stacks/HDP/3.0/services/ZEPPELIN/package'}}, u'fullServiceLevelMetadata': True, u'status_commands_to_run': [u'STATUS']}, u'-1': {u'clusterLevelParams': {u'jdk_location': u'https://{ambari-host}:8080/resources', u'agent_stack_retry_count': u'5', u'db_driver_filename': u'mysql-connector-java.jar', u'agent_stack_retry_on_unavailability': u'false', u'ambari_db_rca_url': u'jdbc:postgresql://{ambari-host}/ambarirca', u'jce_name': u'jce_policy-8.zip', u'java_version': u'8', u'ambari_db_rca_password': u'mapred', u'custom_mysql_jdbc_name': u'mysql-connector-java.jar', u'ambari_server_port': u'8080', u'host_sys_prepped': u'false', u'db_name': u'ambari', u'oracle_jdbc_url': u'https://{ambari-host}:8080/resources/ojdbc6.jar', u'ambari_db_rca_username': u'mapred', u'ambari_db_rca_driver': u'org.postgresql.Driver', u'ambari_server_use_ssl': u'true', u'ambari_server_host': u'{ambari-host}', u'jdk_name': u'jdk-8u112-linux-x64.tar.gz', u'java_home': u'/usr/jdk64/jdk1.8.0_112', u'gpl_license_accepted': u'true', u'mysql_jdbc_url': u'https://{ambari-host}:8080/resources/mysql-connector-java.jar'}, u'agentConfigs': {u'agentConfig': {u'agent.auto.cache.update': u'true', u'agent.check.remote.mounts': u'false', u'agent.check.mounts.timeout': u'0', u'java.home': u'/usr/jdk64/jdk1.8.0_112'}}, u'fullServiceLevelMetadata': False}}} INFO 2019-04-30 09:48:26,999 ClusterCache.py:125 - Rewriting cache ClusterMetadataCache for cluster 2 INFO 2019-04-30 09:48:26,999 ClusterCache.py:125 - Rewriting cache ClusterMetadataCache for cluster -1 INFO 2019-04-30 09:48:27,001 AmbariConfig.py:370 - Updating config property (agent.auto.cache.update) with value (true) INFO 2019-04-30 09:48:27,002 AmbariConfig.py:370 - Updating config property (agent.check.remote.mounts) with value (false) INFO 2019-04-30 09:48:27,002 AmbariConfig.py:370 - Updating config property (agent.check.mounts.timeout) with value (0) INFO 2019-04-30 09:48:27,002 AmbariConfig.py:370 - Updating config property (java.home) with value (/usr/jdk64/jdk1.8.0_112) INFO 2019-04-30 09:48:27,002 security.py:135 - Event to server at /agents/configs (correlation_id=3): {'hash': '66ba723978dd5bb4eda852a97efde0316994c6e9b6f4c85b040fb4bed4bd41256ae7221065e8019daad2959b9f081e41a05419adb18d4aa3573d8e0ed4a3686f'} INFO 2019-04-30 09:48:27,103 __init__.py:57 - Event from server at /user/ (correlation_id=3): {u'timestamp': 
1556543883317, u'hash': u'beec2647f85938f2531e55ac31e43406a98cc3176842b5b0117bbc1598aef3011082d4182d74e07b5b1199d8d925774f5d06c63fb1f9be218779826ed8c092ff', u'clusters': {u'2': {u'configurationAttributes': {u'ranger-knox-plugin-properties': {}, u'ranger-hdfs-audit': {}, u'ranger-hdfs-policymgr-ssl': {}, u'ranger-knox-audit': {}, u'ams-grafana-env': {}, u'ranger-hive-policymgr-ssl': {}, u'llap-cli-log4j2': {}, u'ranger-hive-security': {}, u'spark2-metrics-properties': {}, u'ams-hbase-security-site': {}, u'hdfs-site': {u'final': {u'dfs.datanode.failed.volumes.tolerated': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true'}}, u'activity-conf': {}, u'ams-env': {}, u'knox-env': {}, u'zookeeper-log4j': {}, u'hadoop-metrics2.properties': {}, u'hdfs-log4j': {}, u'ranger-yarn-audit': {}, u'admin-topology': {}, u'gateway-site': {}, u'zeppelin-site': {}, u'ranger-hdfs-plugin-properties': {}, u'activity-env': {}, u'hst-log4j': {}, u'spark2-thrift-fairscheduler': {}, u'topology': {}, u'viewfs-mount-table': {}, u'yarn-hbase-env': {}, u'hadoop-env': {}, u'tez-interactive-site': {u'final': {u'tez.runtime.shuffle.ssl.enable': u'true'}}, u'ranger-knox-security': {}, u'parquet-logging': {}, u'yarn-hbase-log4j': {}, u'spark2-hive-site-override': {}, u'activity-zeppelin-interpreter': {}, u'users-ldif': {}, u'gateway-log4j': {}, u'activity-zeppelin-site': {}, u'spark2-defaults': {}, u'hive-log4j2': {}, u'zeppelin-log4j-properties': {}, u'activity-zeppelin-shiro': {}, u'ams-ssl-server': {}, u'tez-site': {}, u'anonymization-rules': {}, u'hiveserver2-site': {}, u'ranger-hive-plugin-properties': {}, u'activity-log4j': {}, u'core-site': {u'text': {u'hadoop.proxyuser.zeppelin.hosts': u'true', u'hadoop.proxyuser.zeppelin.groups': u'true'}, u'final': {u'fs.defaultFS': u'true'}}, u'yarn-hbase-site': {}, u'knoxsso-topology': {}, u'hiveserver2-interactive-site': {}, u'capacity-scheduler': {}, u'zoo.cfg': {}, u'ams-log4j': {}, u'hive-exec-log4j2': {}, u'zookeeper-env': {}, u'ams-hbase-log4j': {}, u'cluster-env': {}, u'mapred-site': {}, u'ranger-yarn-plugin-properties': {}, u'ams-hbase-site': {u'final': {u'hbase.zookeeper.quorum': u'true'}}, u'ssl-client': {}, u'hivemetastore-site': {}, u'product-info': {}, u'ams-site': {}, u'ams-hbase-policy': {}, u'hadoop-policy': {}, u'spark2-env': {}, u'spark2-thrift-sparkconf': {u'final': {u'spark.eventLog.dir': u'true', u'spark.eventLog.enabled': u'true', u'spark.history.fs.logDirectory': u'true'}}, u'resource-types': {}, u'mapred-env': {}, u'ldap-log4j': {}, u'container-executor': {}, u'hive-env': {}, u'spark2-log4j-properties': {}, u'ranger-yarn-policymgr-ssl': {}, u'yarn-site': {u'hidden': {u'hadoop.registry.dns.bind-port': u'true'}}, u'livy2-spark-blacklist': {}, u'ranger-knox-policymgr-ssl': {}, u'yarn-hbase-policy': {}, u'ranger-hdfs-security': {}, u'livy2-log4j-properties': {}, u'hive-interactive-env': {}, u'ranger-hive-audit': {}, u'zeppelin-env': {}, u'ams-ssl-client': {}, u'livy2-conf': {}, u'hst-agent-conf': {}, u'ams-hbase-env': {}, u'hive-atlas-application.properties': {}, u'zeppelin-shiro-ini': {}, u'ams-grafana-ini': {}, u'livy2-env': {}, u'hive-site': {u'hidden': {u'javax.jdo.option.ConnectionPassword': u'HIVE_CLIENT,CONFIG_DOWNLOAD'}}, u'tez-env': {}, u'hive-interactive-site': {}, u'yarn-env': {}, u'beeline-log4j2': {}, u'ranger-yarn-security': {}, u'ssl-server': {}, u'hst-server-conf': {}, u'llap-daemon-log4j': {}, u'yarn-log4j': {}, u'activity-zeppelin-env': {}}, 
u'configurations': {u'ranger-knox-plugin-properties': {}, u'ranger-hdfs-audit': {}, u'ranger-hdfs-policymgr-ssl': {}, u'ranger-knox-audit': {}, u'ams-grafana-env': {u'metrics_grafana_username': u'admin', u'metrics_grafana_pid_dir': u'/var/run/ambari-metrics-grafana', u'metrics_grafana_data_dir': u'/var/lib/ambari-metrics-grafana', u'content': u'\n# Set environment variables here.\n\n# AMS UI Server Home Dir\nexport AMS_GRAFANA_HOME_DIR={{ams_grafana_home_dir}}\n\n# AMS UI Server Data Dir\nexport AMS_GRAFANA_DATA_DIR={{ams_grafana_data_dir}}\n\n# AMS UI Server Log Dir\nexport AMS_GRAFANA_LOG_DIR={{ams_grafana_log_dir}}\n\n# AMS UI Server PID Dir\nexport AMS_GRAFANA_PID_DIR={{ams_grafana_pid_dir}}', u'metrics_grafana_password': u'h1tfmM8L6WNeBVSi8uL9', u'metrics_grafana_log_dir': u'/var/log/ambari-metrics-grafana'}, u'ranger-hive-policymgr-ssl': {}, u'llap-cli-log4j2': {u'content': u'\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nstatus = WARN\nname = LlapCliLog4j2\npackages = org.apache.hadoop.hive.ql.log\n\n# list of properties\nproperty.hive.log.level = WARN\nproperty.hive.root.logger = console\nproperty.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}\nproperty.hive.log.file = llap-cli.log\nproperty.hive.llapstatus.consolelogger.level = INFO\n\n# list of all appenders\nappenders = console, DRFA, llapstatusconsole\n\n# console appender\nappender.console.type = Console\nappender.console.name = console\nappender.console.target = SYSTEM_ERR\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = %p %c{2}: %m%n\n\n# llapstatusconsole appender\nappender.llapstatusconsole.type = Console\nappender.llapstatusconsole.name = llapstatusconsole\nappender.llapstatusconsole.target = SYSTEM_OUT\nappender.llapstatusconsole.layout.type = PatternLayout\nappender.llapstatusconsole.layout.pattern = %m%n\n\n# daily rolling file appender\nappender.DRFA.type = RollingRandomAccessFile\nappender.DRFA.name = DRFA\nappender.DRFA.fileName = ${sys:hive.log.dir}/${sys:hive.log.file}\n# Use %pid in the filePattern to append process-id@host-name to the filename if you want separate log files for different CLI session\nappender.DRFA.filePattern = ${sys:hive.log.dir}/${sys:hive.log.file}.%d{yyyy-MM-dd}_%i\nappender.DRFA.layout.type = PatternLayout\nappender.DRFA.layout.pattern = %d{ISO8601} %5p [%t] %c{2}: %m%n\nappender.DRFA.policies.type = Policies\nappender.DRFA.policies.time.type = TimeBasedTriggeringPolicy\nappender.DRFA.policies.time.interval = 1\nappender.DRFA.policies.time.modulate = true\nappender.DRFA.strategy.type = DefaultRolloverStrategy\nappender.DRFA.strategy.max = {{llap_cli_log_maxbackupindex}}\nappender.DRFA.policies.fsize.type = SizeBasedTriggeringPolicy\nappender.DRFA.policies.fsize.size = {{llap_cli_log_maxfilesize}}MB\n\n# list of all 
loggers\nloggers = ZooKeeper, DataNucleus, Datastore, JPOX, HadoopConf, LlapStatusServiceDriverConsole\n\nlogger.ZooKeeper.name = org.apache.zookeeper\nlogger.ZooKeeper.level = WARN\n\nlogger.DataNucleus.name = DataNucleus\nlogger.DataNucleus.level = ERROR\n\nlogger.Datastore.name = Datastore\nlogger.Datastore.level = ERROR\n\nlogger.JPOX.name = JPOX\nlogger.JPOX.level = ERROR\n\nlogger.HadoopConf.name = org.apache.hadoop.conf.Configuration\nlogger.HadoopConf.level = ERROR\n\nlogger.LlapStatusServiceDriverConsole.name = LlapStatusServiceDriverConsole\nlogger.LlapStatusServiceDriverConsole.additivity = false\nlogger.LlapStatusServiceDriverConsole.level = ${sys:hive.llapstatus.consolelogger.level}\n\n\n# root logger\nrootLogger.level = ${sys:hive.log.level}\nrootLogger.appenderRefs = root, DRFA\nrootLogger.appenderRef.root.ref = ${sys:hive.root.logger}\nrootLogger.appenderRef.DRFA.ref = DRFA\nlogger.LlapStatusServiceDriverConsole.appenderRefs = llapstatusconsole, DRFA\nlogger.LlapStatusServiceDriverConsole.appenderRef.llapstatusconsole.ref = llapstatusconsole\nlogger.LlapStatusServiceDriverConsole.appenderRef.DRFA.ref = DRFA', u'llap_cli_log_maxbackupindex': u'30', u'llap_cli_log_maxfilesize': u'256'}, u'ranger-hive-security': {}, u'spark2-metrics-properties': {u'content': u'\n# syntax: [instance].sink|source.[name].[options]=[value]\n\n# This file configures Spark\'s internal metrics system. The metrics system is\n# divided into instances which correspond to internal components.\n# Each instance can be configured to report its metrics to one or more sinks.\n# Accepted values for [instance] are "master", "worker", "executor", "driver",\n# and "applications". A wild card "*" can be used as an instance name, in\n# which case all instances will inherit the supplied property.\n#\n# Within an instance, a "source" specifies a particular set of grouped metrics.\n# there are two kinds of sources:\n# 1. Spark internal sources, like MasterSource, WorkerSource, etc, which will\n# collect a Spark component\'s internal state. Each instance is paired with a\n# Spark source that is added automatically.\n# 2. Common sources, like JvmSource, which will collect low level state.\n# These can be added through configuration options and are then loaded\n# using reflection.\n#\n# A "sink" specifies where metrics are delivered to. Each instance can be\n# assigned one or more sinks.\n#\n# The sink|source field specifies whether the property relates to a sink or\n# source.\n#\n# The [name] field specifies the name of source or sink.\n#\n# The [options] field is the specific property of this source or sink. The\n# source or sink is responsible for parsing this property.\n#\n# Notes:\n# 1. To add a new sink, set the "class" option to a fully qualified class\n# name (see examples below).\n# 2. Some sinks involve a polling period. The minimum allowed polling period\n# is 1 second.\n# 3. Wild card properties can be overridden by more specific properties.\n# For example, master.sink.console.period takes precedence over\n# *.sink.console.period.\n# 4. A metrics specific configuration\n# "spark.metrics.conf=${SPARK_HOME}/conf/metrics.properties" should be\n# added to Java properties using -Dspark.metrics.conf=xxx if you want to\n# customize metrics system. You can also put the file in ${SPARK_HOME}/conf\n# and it will be loaded automatically.\n# 5. 
MetricsServlet is added by default as a sink in master, worker and client\n# driver, you can send http request "/metrics/json" to get a snapshot of all the\n# registered metrics in json format. For master, requests "/metrics/master/json" and\n# "/metrics/applications/json" can be sent seperately to get metrics snapshot of\n# instance master and applications. MetricsServlet may not be configured by self.\n#\n\n## List of available sinks and their properties.\n\n# org.apache.spark.metrics.sink.ConsoleSink\n# Name: Default: Description:\n# period 10 Poll period\n# unit seconds Units of poll period\n\n# org.apache.spark.metrics.sink.CSVSink\n# Name: Default: Description:\n# period 10 Poll period\n# unit seconds Units of poll period\n# directory /tmp Where to store CSV files\n\n# org.apache.spark.metrics.sink.GangliaSink\n# Name: Default: Description:\n# host NONE Hostname or multicast group of Ganglia server\n# port NONE Port of Ganglia server(s)\n# period 10 Poll period\n# unit seconds Units of poll period\n# ttl 1 TTL of messages sent by Ganglia\n# mode multicast Ganglia network mode (\'unicast\' or \'multicast\')\n\n# org.apache.spark.metrics.sink.JmxSink\n\n# org.apache.spark.metrics.sink.MetricsServlet\n# Name: Default: Description:\n# path VARIES* Path prefix from the web server root\n# sample false Whether to show entire set of samples for histograms (\'false\' or \'true\')\n#\n# * Default path is /metrics/json for all instances except the master. The master has two paths:\n# /metrics/aplications/json # App information\n# /metrics/master/json # Master information\n\n# org.apache.spark.metrics.sink.GraphiteSink\n# Name: Default: Description:\n# host NONE Hostname of Graphite server\n# port NONE Port of Graphite server\n# period 10 Poll period\n# unit seconds Units of poll period\n# prefix EMPTY STRING Prefix to prepend to metric name\n\n## Examples\n# Enable JmxSink for all instances by class name\n#*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink\n\n# Enable ConsoleSink for all instances by class name\n#*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink\n\n# Polling period for ConsoleSink\n#*.sink.console.period=10\n\n#*.sink.console.unit=seconds\n\n# Master instance overlap polling period\n#master.sink.console.period=15\n\n#master.sink.console.unit=seconds\n\n# Enable CsvSink for all instances\n#*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink\n\n# Polling period for CsvSink\n#*.sink.csv.period=1\n\n#*.sink.csv.unit=minutes\n\n# Polling directory for CsvSink\n#*.sink.csv.directory=/tmp/\n\n# Worker instance overlap polling period\n#worker.sink.csv.period=10\n\n#worker.sink.csv.unit=minutes\n\n# Enable jvm source for instance master, worker, driver and executor\n#master.source.jvm.class=org.apache.spark.metrics.source.JvmSource\n\n#worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource\n\n#driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource\n\n#executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource'}, u'ams-hbase-security-site': {u'ams.zookeeper.principal': u'', u'hadoop.security.authentication': u'', u'hbase.security.authorization': u'', u'hbase.master.kerberos.principal': u'', u'hbase.regionserver.keytab.file': u'', u'hbase.zookeeper.property.kerberos.removeHostFromPrincipal': u'', u'hbase.regionserver.kerberos.principal': u'', u'hbase.coprocessor.region.classes': u'', u'ams.zookeeper.keytab': u'', u'hbase.zookeeper.property.kerberos.removeRealmFromPrincipal': u'', u'hbase.master.keytab.file': u'', 
u'hbase.security.authentication': u'', u'hbase.coprocessor.master.classes': u'', u'hbase.myclient.principal': u'', u'hbase.myclient.keytab': u'', u'hbase.zookeeper.property.jaasLoginRenew': u'', u'hbase.zookeeper.property.authProvider.1': u''}, u'hdfs-site': {u'dfs.namenode.checkpoint.period': u'21600', u'dfs.namenode.avoid.write.stale.datanode': u'true', u'dfs.namenode.startup.delay.block.deletion.sec': u'3600', u'dfs.namenode.checkpoint.txns': u'1000000', u'dfs.content-summary.limit': u'5000', u'dfs.datanode.data.dir': u'/hadoop/hdfs/data', u'dfs.cluster.administrators': u' hdfs', u'dfs.namenode.audit.log.async': u'true', u'dfs.datanode.balance.bandwidthPerSec': u'6250000', u'dfs.namenode.safemode.threshold-pct': u'1', u'dfs.namenode.checkpoint.edits.dir': u'${dfs.namenode.checkpoint.dir}', u'dfs.namenode.rpc-address': u'{host}:8020', u'dfs.permissions.enabled': u'true', u'dfs.client.read.shortcircuit': u'true', u'dfs.https.port': u'50470', u'dfs.namenode.https-address': u'{host}:50470', u'nfs.file.dump.dir': u'/tmp/.hdfs-nfs', u'dfs.namenode.fslock.fair': u'false', u'dfs.blockreport.initialDelay': u'120', u'dfs.journalnode.edits.dir': u'/hadoop/hdfs/journalnode', u'dfs.blocksize': u'134217728', u'dfs.datanode.max.transfer.threads': u'4096', u'hadoop.caller.context.enabled': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.namenode.handler.count': u'200', u'dfs.namenode.checkpoint.dir': u'/hadoop/hdfs/namesecondary', u'fs.permissions.umask-mode': u'022', u'dfs.datanode.http.address': u'0.0.0.0:50075', u'dfs.datanode.ipc.address': u'0.0.0.0:8010', u'dfs.encrypt.data.transfer.cipher.suites': u'AES/CTR/NoPadding', u'dfs.namenode.acls.enabled': u'true', u'dfs.client.read.shortcircuit.streams.cache.size': u'4096', u'dfs.datanode.address': u'0.0.0.0:50010', u'manage.include.files': u'false', u'dfs.replication': u'3', u'dfs.datanode.failed.volumes.tolerated': u'0', u'dfs.namenode.accesstime.precision': u'0', u'dfs.datanode.https.address': u'0.0.0.0:50475', u'dfs.namenode.avoid.read.stale.datanode': u'true', u'dfs.namenode.secondary.http-address': u'{host}:50090', u'nfs.exports.allowed.hosts': u'* rw', u'dfs.datanode.du.reserved': u'6707609600', u'dfs.namenode.stale.datanode.interval': u'30000', u'dfs.heartbeat.interval': u'3', u'dfs.namenode.http-address': u'{host}:50070', u'dfs.http.policy': u'HTTP_ONLY', u'dfs.block.access.token.enable': u'true', u'dfs.client.retry.policy.enabled': u'false', u'dfs.permissions.superusergroup': u'hdfs', u'dfs.journalnode.https-address': u'0.0.0.0:8481', u'dfs.journalnode.http-address': u'0.0.0.0:8480', u'dfs.domain.socket.path': u'/var/lib/hadoop-hdfs/dn_socket', u'dfs.namenode.write.stale.datanode.ratio': u'1.0f', u'dfs.hosts.exclude': u'/etc/hadoop/conf/dfs.exclude', u'dfs.datanode.data.dir.perm': u'750', u'dfs.namenode.name.dir.restore': u'true', u'dfs.replication.max': u'50', u'dfs.namenode.name.dir': u'/hadoop/hdfs/namenode'}, u'activity-conf': {u'tez_job.activity.watcher.enabled': u'true', u'mr_job.activity.watcher.enabled': u'true', u'global.activity.processor.pool.max.wait.seconds': u'60', u'hdfs.activity.watcher.enabled': u'true', u'global.activity.analyzer.user': u'activity_analyzer', u'phoenix.sink.flush.interval.seconds': u'3600', u'mr_job.max.job.size.mb.for.parallel.execution': u'500', u'global.activity.processing.parallelism': u'8', u'activity.explorer.user': u'activity_explorer', u'tez_job.tmp.dir': u'/var/lib/smartsense/activity-analyzer/tez/tmp/', u'phoenix.sink.batch.size': u'5000', u'yarn_app.activity.watcher.enabled': u'true'}, u'ams-env': 
{u'ambari_metrics_user': u'ams', u'min_ambari_metrics_hadoop_sink_version': u'2.7.0.0', u'ams_classpath_additional': u'', u'metrics_monitor_log_dir': u'/var/log/ambari-metrics-monitor', u'timeline.metrics.skip.virtual.interfaces': u'false', u'metrics_collector_log_dir': u'/var/log/ambari-metrics-collector', u'timeline.metrics.skip.network.interfaces.patterns': u'None', u'metrics_monitor_pid_dir': u'/var/run/ambari-metrics-monitor', u'failover_strategy_blacklisted_interval': u'300', u'content': u'\n# Set environment variables here.\n\n# AMS instance name\nexport AMS_INSTANCE_NAME={{hostname}}\n\n# The java implementation to use. Java 1.6 required.\nexport JAVA_HOME={{java64_home}}\n\n# Collector Log directory for log4j\nexport AMS_COLLECTOR_LOG_DIR={{ams_collector_log_dir}}\n\n# Monitor Log directory for outfile\nexport AMS_MONITOR_LOG_DIR={{ams_monitor_log_dir}}\n\n# Collector pid directory\nexport AMS_COLLECTOR_PID_DIR={{ams_collector_pid_dir}}\n\n# Monitor pid directory\nexport AMS_MONITOR_PID_DIR={{ams_monitor_pid_dir}}\n\n# AMS HBase pid directory\nexport AMS_HBASE_PID_DIR={{hbase_pid_dir}}\n\n# AMS Collector heapsize\nexport AMS_COLLECTOR_HEAPSIZE={{metrics_collector_heapsize}}\n\n# HBase Tables Initialization check enabled\nexport AMS_HBASE_INIT_CHECK_ENABLED={{ams_hbase_init_check_enabled}}\n\n# AMS Collector options\nexport AMS_COLLECTOR_OPTS="-Djava.library.path=/usr/lib/ams-hbase/lib/hadoop-native"\n{% if security_enabled %}\nexport AMS_COLLECTOR_OPTS="$AMS_COLLECTOR_OPTS -Djava.security.auth.login.config={{ams_collector_jaas_config_file}}"\n{% endif %}\n\n# AMS Collector GC options\nexport AMS_COLLECTOR_GC_OPTS="-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{ams_collector_log_dir}}/collector-gc.log-`date +\'%Y%m%d%H%M\'`"\nexport AMS_COLLECTOR_OPTS="$AMS_COLLECTOR_OPTS $AMS_COLLECTOR_GC_OPTS"\n\n# Metrics collector host will be blacklisted for specified number of seconds if metric monitor failed to connect to it.\nexport AMS_FAILOVER_STRATEGY_BLACKLISTED_INTERVAL={{failover_strategy_blacklisted_interval}}\n\n# Extra Java CLASSPATH elements for Metrics Collector. Optional.\nexport COLLECTOR_ADDITIONAL_CLASSPATH={{ams_classpath_additional}}', u'timeline.metrics.host.inmemory.aggregation.jvm.arguments': u'-Xmx256m -Xms128m -XX:PermSize=68m', u'metrics_collector_pid_dir': u'/var/run/ambari-metrics-collector', u'timeline.metrics.skip.disk.metrics.patterns': u'true', u'metrics_collector_heapsize': u'512'}, u'knox-env': {u'knox_master_secret': u'eXtvivsSzCJzjlz3kZXV', u'knox_pid_dir': u'/var/run/knox', u'knox_keytab_path': u'', u'knox_group': u'knox', u'knox_user': u'knox', u'knox_principal_name': u''}, u'zookeeper-log4j': {u'content': u'\n#\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n#\n#\n\n#\n# ZooKeeper Logging Configuration\n#\n\n# DEFAULT: console appender only\nlog4j.rootLogger=INFO, CONSOLE, ROLLINGFILE\n\n# Example with rolling log file\n#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE\n\n# Example with rolling log file and tracing\n#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE\n\n#\n# Log INFO level and above messages to the console\n#\nlog4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender\nlog4j.appender.CONSOLE.Threshold=INFO\nlog4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout\nlog4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n\n\n#\n# Add ROLLINGFILE to rootLogger to get log file output\n# Log DEBUG level and above messages to a log file\nlog4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender\nlog4j.appender.ROLLINGFILE.Threshold=DEBUG\nlog4j.appender.ROLLINGFILE.File={{zk_log_dir}}/zookeeper.log\n\n# Max log file size of 10MB\nlog4j.appender.ROLLINGFILE.MaxFileSize={{zookeeper_log_max_backup_size}}MB\n# uncomment the next line to limit number of backup files\n#log4j.appender.ROLLINGFILE.MaxBackupIndex={{zookeeper_log_number_of_backup_files}}\n\nlog4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout\nlog4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n\n\n\n#\n# Add TRACEFILE to rootLogger to get log file output\n# Log DEBUG level and above messages to a log file\nlog4j.appender.TRACEFILE=org.apache.log4j.FileAppender\nlog4j.appender.TRACEFILE.Threshold=TRACE\nlog4j.appender.TRACEFILE.File=zookeeper_trace.log\n\nlog4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout\n### Notice we are including log4j\'s NDC here (%x)\nlog4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n', u'zookeeper_log_max_backup_size': u'10', u'zookeeper_log_number_of_backup_files': u'10'}, u'hadoop-metrics2.properties': {u'content': u'\n{% if has_ganglia_server %}\n*.period=60\n\n*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31\n*.sink.ganglia.period=10\n\n# default for supportsparse is false\n*.sink.ganglia.supportsparse=true\n\n.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both\n.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40\n\n# Hook up to the server\nnamenode.sink.ganglia.servers={{ganglia_server_host}}:8661\ndatanode.sink.ganglia.servers={{ganglia_server_host}}:8659\njobtracker.sink.ganglia.servers={{ganglia_server_host}}:8662\ntasktracker.sink.ganglia.servers={{ganglia_server_host}}:8658\nmaptask.sink.ganglia.servers={{ganglia_server_host}}:8660\nreducetask.sink.ganglia.servers={{ganglia_server_host}}:8660\nresourcemanager.sink.ganglia.servers={{ganglia_server_host}}:8664\nnodemanager.sink.ganglia.servers={{ganglia_server_host}}:8657\nhistoryserver.sink.ganglia.servers={{ganglia_server_host}}:8666\njournalnode.sink.ganglia.servers={{ganglia_server_host}}:8654\nnimbus.sink.ganglia.servers={{ganglia_server_host}}:8649\nsupervisor.sink.ganglia.servers={{ganglia_server_host}}:8650\n\nresourcemanager.sink.ganglia.tagsForPrefix.yarn=Queue\n\n{% endif %}\n\n{% if has_metric_collector 
%}\n\n*.period={{metrics_collection_period}}\n*.sink.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink.jar\n*.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink\n*.sink.timeline.period={{metrics_collection_period}}\n*.sink.timeline.sendInterval={{metrics_report_interval}}000\n*.sink.timeline.slave.host.name={{hostname}}\n*.sink.timeline.zookeeper.quorum={{zookeeper_quorum}}\n*.sink.timeline.protocol={{metric_collector_protocol}}\n*.sink.timeline.port={{metric_collector_port}}\n*.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}\n*.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}\n{% if is_aggregation_https_enabled %}\n*.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}\n{% endif %}\n\n# HTTPS properties\n*.sink.timeline.truststore.path = {{metric_truststore_path}}\n*.sink.timeline.truststore.type = {{metric_truststore_type}}\n*.sink.timeline.truststore.password = {{metric_truststore_password}}\n\ndatanode.sink.timeline.collector.hosts={{ams_collector_hosts}}\nnamenode.sink.timeline.collector.hosts={{ams_collector_hosts}}\nresourcemanager.sink.timeline.collector.hosts={{ams_collector_hosts}}\nnodemanager.sink.timeline.collector.hosts={{ams_collector_hosts}}\njobhistoryserver.sink.timeline.collector.hosts={{ams_collector_hosts}}\njournalnode.sink.timeline.collector.hosts={{ams_collector_hosts}}\nmaptask.sink.timeline.collector.hosts={{ams_collector_hosts}}\nreducetask.sink.timeline.collector.hosts={{ams_collector_hosts}}\napplicationhistoryserver.sink.timeline.collector.hosts={{ams_collector_hosts}}\n\nresourcemanager.sink.timeline.tagsForPrefix.yarn=Queue\n\n{% if is_nn_client_port_configured %}\n# Namenode rpc ports customization\nnamenode.sink.timeline.metric.rpc.client.port={{nn_rpc_client_port}}\n{% endif %}\n{% if is_nn_dn_port_configured %}\nnamenode.sink.timeline.metric.rpc.datanode.port={{nn_rpc_dn_port}}\n{% endif %}\n{% if is_nn_healthcheck_port_configured %}\nnamenode.sink.timeline.metric.rpc.healthcheck.port={{nn_rpc_healthcheck_port}}\n{% endif %}\n\n{% endif %}'}, u'hdfs-log4j': {u'content': u'\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\n\n# Define some default values that can be overridden by system properties\n# To change daemon root logger use hadoop_root_logger in hadoop-env\nhadoop.root.logger=INFO,console\nhadoop.log.dir=.\nhadoop.log.file=hadoop.log\n\n\n# Define the root logger to the system property "hadoop.root.logger".\nlog4j.rootLogger=${hadoop.root.logger}, EventCounter\n\n# Logging Threshold\nlog4j.threshhold=ALL\n\n#\n# Daily Rolling File Appender\n#\n\nlog4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}\n\n# Rollver at midnight\nlog4j.appender.DRFA.DatePattern=.yyyy-MM-dd\n\n# 30-day backup\n#log4j.appender.DRFA.MaxBackupIndex=30\nlog4j.appender.DRFA.layout=org.apache.log4j.PatternLayout\n\n# Pattern format: Date LogLevel LoggerName LogMessage\nlog4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\n# Debugging Pattern format\n#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n\n\n\n#\n# console\n# Add "console" to rootlogger above if you want to use this\n#\n\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\n\n#\n# TaskLog Appender\n#\n\n#Default values\nhadoop.tasklog.taskid=null\nhadoop.tasklog.iscleanup=false\nhadoop.tasklog.noKeepSplits=4\nhadoop.tasklog.totalLogFileSize=100\nhadoop.tasklog.purgeLogSplits=true\nhadoop.tasklog.logsRetainHours=12\n\nlog4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender\nlog4j.appender.TLA.taskId=${hadoop.tasklog.taskid}\nlog4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}\nlog4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}\n\nlog4j.appender.TLA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\n\n#\n#Security audit appender\n#\nhadoop.security.logger=INFO,console\nhadoop.security.log.maxfilesize={{hadoop_security_log_max_backup_size}}MB\nhadoop.security.log.maxbackupindex={{hadoop_security_log_number_of_backup_files}}\nlog4j.category.SecurityLogger=${hadoop.security.logger}\nhadoop.security.log.file=SecurityAuth.audit\nlog4j.additivity.SecurityLogger=false\nlog4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}\nlog4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout\nlog4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\nlog4j.appender.DRFAS.DatePattern=.yyyy-MM-dd\n\nlog4j.appender.RFAS=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}\nlog4j.appender.RFAS.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\nlog4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}\nlog4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}\n\n#\n# hdfs audit 
logging\n#\nhdfs.audit.logger=INFO,console\nlog4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}\nlog4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false\nlog4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log\nlog4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout\nlog4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n\nlog4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd\n\n#\n# NameNode metrics logging.\n# The default is to retain two namenode-metrics.log files up to 64MB each.\n#\nnamenode.metrics.logger=INFO,NullAppender\nlog4j.logger.NameNodeMetricsLog=${namenode.metrics.logger}\nlog4j.additivity.NameNodeMetricsLog=false\nlog4j.appender.NNMETRICSRFA=org.apache.log4j.RollingFileAppender\nlog4j.appender.NNMETRICSRFA.File=${hadoop.log.dir}/namenode-metrics.log\nlog4j.appender.NNMETRICSRFA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.NNMETRICSRFA.layout.ConversionPattern=%d{ISO8601} %m%n\nlog4j.appender.NNMETRICSRFA.MaxBackupIndex=1\nlog4j.appender.NNMETRICSRFA.MaxFileSize=64MB\n\n#\n# mapred audit logging\n#\nmapred.audit.logger=INFO,console\nlog4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}\nlog4j.additivity.org.apache.hadoop.mapred.AuditLogger=false\nlog4j.appender.MRAUDIT=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log\nlog4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout\nlog4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n\nlog4j.appender.MRAUDIT.DatePattern=.yyyy-MM-dd\n\n#\n# Rolling File Appender\n#\n\nlog4j.appender.RFA=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}\n\n# Logfile size and and 30-day backups\nlog4j.appender.RFA.MaxFileSize={{hadoop_log_max_backup_size}}MB\nlog4j.appender.RFA.MaxBackupIndex={{hadoop_log_number_of_backup_files}}\n\nlog4j.appender.RFA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n\nlog4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n\n\n\n# Custom Logging levels\n\nhadoop.metrics.log.level=INFO\n#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG\n#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG\n#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG\nlog4j.logger.org.apache.hadoop.metrics2=${hadoop.metrics.log.level}\n\n# Jets3t library\nlog4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR\n\n#\n# Null Appender\n# Trap security logger on the hadoop client side\n#\nlog4j.appender.NullAppender=org.apache.log4j.varia.NullAppender\n\n#\n# Event Counter Appender\n# Sends counts of logging messages at different severity levels to Hadoop Metrics.\n#\nlog4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter\n\n# Removes "deprecated" messages\nlog4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN\n\n#\n# HDFS block state change log from block manager\n#\n# Uncomment the following to suppress normal block state change\n# messages from BlockManager in NameNode.\n#log4j.logger.BlockStateChange=WARN\n\n# Adding logging for 3rd party library\nlog4j.logger.org.apache.commons.beanutils=WARN', u'hadoop_security_log_max_backup_size': u'256', u'hadoop_log_max_backup_size': u'256', u'hadoop_log_number_of_backup_files': u'10', u'hadoop_security_log_number_of_backup_files': u'20'}, 
u'ranger-yarn-audit': {}, u'admin-topology': {u'content': u'\n <topology>\n\n <gateway>\n\n <provider>\n <role>authentication</role>\n <name>ShiroProvider</name>\n <enabled>true</enabled>\n <param>\n <name>sessionTimeout</name>\n <value>30</value>\n </param>\n <param>\n <name>main.ldapRealm</name>\n <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>\n </param>\n <param>\n <name>main.ldapRealm.userDnTemplate</name>\n <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.url</name>\n <value>ldap://{{knox_host_name}}:33389</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.authenticationMechanism</name>\n <value>simple</value>\n </param>\n <param>\n <name>urls./**</name>\n <value>authcBasic</value>\n </param>\n </provider>\n\n <provider>\n <role>authorization</role>\n <name>AclsAuthz</name>\n <enabled>true</enabled>\n <param>\n\t <name>knox.acl.mode</name>\n\t <value>OR</value>\n </param>\n <param>\n <name>knox.acl</name>\n <value>KNOX_ADMIN_USERS;KNOX_ADMIN_GROUPS;*</value>\n </param>\n </provider>\n\n <provider>\n <role>identity-assertion</role>\n <name>HadoopGroupProvider</name>\n <enabled>true</enabled>\n <param>\n <name>CENTRAL_GROUP_CONFIG_PREFIX</name>\n <value>gateway.group.config.</value>\n </param>\n </provider>\n\n </gateway>\n\n <service>\n <role>KNOX</role>\n </service>\n\n </topology>'}, u'gateway-site': {u'java.security.auth.login.config': u'/etc/knox/conf/krb5JAASLogin.conf', u'gateway.dispatch.whitelist': u'DEFAULT', u'gateway.knox.admin.groups': u'admin', u'gateway.dispatch.whitelist.services': u'DATANODE,HBASEUI,HDFSUI,JOBHISTORYUI,NODEUI,RESOURCEMANAGER,WEBHBASE,WEBHDFS,YARNUI', u'gateway.gateway.conf.dir': u'deployments', u'gateway.path': u'gateway', u'gateway.hadoop.kerberos.secured': u'false', u'sun.security.krb5.debug': u'false', u'gateway.port': u'8443', u'gateway.websocket.feature.enabled': u'{{websocket_support}}', u'gateway.read.only.override.topologies': u'admin,knoxsso,default', u'gateway.knox.admin.users': u'admin', u'java.security.krb5.conf': u'/etc/knox/conf/krb5.conf'}, u'zeppelin-site': {u'zeppelin.server.port': u'9995', u'zeppelin.ssl.truststore.password': u'change me', u'zeppelin.interpreters': u'org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.angular.AngularInterpreter,org.apache.zeppelin.shell.ShellInterpreter,org.apache.zeppelin.jdbc.JDBCInterpreter,org.apache.zeppelin.phoenix.PhoenixInterpreter,org.apache.zeppelin.livy.LivySparkInterpreter,org.apache.zeppelin.livy.LivyPySparkInterpreter,org.apache.zeppelin.livy.LivySparkRInterpreter,org.apache.zeppelin.livy.LivySparkSQLInterpreter', u'zeppelin.ssl.truststore.path': u'conf/truststore', u'zeppelin.notebook.dir': u'notebook', u'zeppelin.ssl.keystore.password': u'test', u'zeppelin.ssl.keystore.path': u'keystore.p12', u'zeppelin.notebook.public': u'false', u'zeppelin.server.addr': u'0.0.0.0', u'zeppelin.interpreter.config.upgrade': u'true', u'zeppelin.ssl.client.auth': u'false', u'zeppelin.notebook.homescreen': u' ', u'zeppelin.interpreter.dir': u'interpreter', u'zeppelin.ssl.keystore.type': u'PKCS12', u'zeppelin.notebook.s3.user': u'user', u'zeppelin.ssl.key.manager.password': u'change me', u'zeppelin.anonymous.allowed': u'false', u'zeppelin.ssl.truststore.type': u'JKS', u'zeppelin.config.fs.dir': u'conf', 
u'zeppelin.config.storage.class': u'org.apache.zeppelin.storage.FileSystemConfigStorage', u'zeppelin.ssl': u'false', u'zeppelin.notebook.storage': u'org.apache.zeppelin.notebook.repo.FileSystemNotebookRepo', u'zeppelin.notebook.homescreen.hide': u'false', u'zeppelin.websocket.max.text.message.size': u'1024000', u'zeppelin.interpreter.connect.timeout': u'30000', u'zeppelin.notebook.s3.bucket': u'zeppelin', u'zeppelin.server.ssl.port': u'9995', u'zeppelin.interpreter.group.order': u'spark,angular,jdbc,livy,md,sh', u'zeppelin.server.allowed.origins': u'*'}, u'ranger-hdfs-plugin-properties': {}, u'activity-env': {u'activity-env-content': u'#!/bin/bash\n\n# Copyright (c) 2011-2018, Hortonworks Inc. All rights reserved.\n# Except as expressly permitted in a written agreement between you\n# or your company and Hortonworks, Inc, any use, reproduction,\n# modification, redistribution, sharing, lending or other exploitation\n# of all or any part of the contents of this file is strictly prohibited.\n\n# Enable verbose shell execution\n#set -xv\n\n## Set HOME for various components\nexport HADOOP_HOME=/usr/hdp/current/hadoop-client\nexport HDFS_HOME=/usr/hdp/current/hadoop-hdfs-client\nexport MAPREDUCE_HOME=/usr/hdp/current/hadoop-mapreduce-client\nexport YARN_HOME=/usr/hdp/current/hadoop-yarn-client\nexport HIVE_HOME=/usr/hdp/current/hive-client\nexport HCAT_HOME=/usr/hdp/current/hive-webhcat\nexport TEZ_HOME=/usr/hdp/current/tez-client\nexport HBASE_HOME=/usr/hdp/current/hbase-client\nexport PHOENIX_HOME=/usr/hdp/current/phoenix-client\nexport ACTIVITY_ANALYZER_HOME=/usr/hdp/share/hst/activity-analyzer\nexport AMS_COLLECTOR_HOME=/usr/lib/ambari-metrics-collector\nexport JAVA_HOME={{java_home}}\n\n## Set conf dir for various components\nexport HADOOP_CONF_DIR=/etc/hadoop/conf/\nexport HIVE_CONF_DIR=/etc/hive/conf/\nexport HBASE_CONF_DIR=/etc/hbase/conf/\nexport TEZ_CONF_DIR=/etc/tez/conf/\nexport ACTIVITY_ANALYZER_CONF_DIR=/etc/smartsense-activity/conf/\nexport AMS_HBASE_CONF=/etc/ams-hbase/conf\n\nexport DEBUG_ENABLED=false\n\n## Set JVM related configs\nexport ANALYZER_JAVA_OPTS="{{analyzer_jvm_opts}} -Xmx{{analyzer_jvm_heap}}m"', u'analyzer_jvm_heap': u'8192', u'analyzer_jvm_opts': u'-Xms128m'}, u'hst-log4j': {u'hst_log_dir': u'/var/log/hst', u'hst-log4j-content': u'\n# Copyright (c) 2011-2018, Hortonworks Inc. 
All rights reserved.\n# Except as expressly permitted in a written agreement between you\n# or your company and Hortonworks, Inc, any use, reproduction,\n# modification, redistribution, sharing, lending or other exploitation\n# of all or any part of the contents of this file is strictly prohibited.\n\n# Define some default values that can be overridden by system properties\n# Root logger option\nlog4j.rootLogger=INFO,file\n\nlog4j.appender.file=org.apache.log4j.RollingFileAppender\nlog4j.appender.file.File={{hst_log_dir}}/${log.file.name}\nlog4j.appender.file.MaxFileSize={{hst_max_file_size}}MB\nlog4j.appender.file.MaxBackupIndex={{hst_max_backup_index}}\nlog4j.appender.file.layout=org.apache.log4j.PatternLayout\nlog4j.appender.file.layout.ConversionPattern=%d{ISO8601} %5p [%t] %c{1}:%L - %m%n\n\nlog4j.appender.analytics=org.apache.log4j.RollingFileAppender\nlog4j.appender.analytics.File={{hst_log_dir}}/analytics.log\nlog4j.appender.analytics.MaxFileSize={{hst_max_file_size}}MB\nlog4j.appender.analytics.MaxBackupIndex={{hst_max_backup_index}}\nlog4j.appender.analytics.layout=org.apache.log4j.PatternLayout\nlog4j.appender.analytics.layout.ConversionPattern=%m%n\n\n# HST logger\nlog4j.logger.com.hortonworks=INFO\ncom.github.oxo42.stateless4j=WARN\nlog4j.logger.com.sun.jersey=WARN\nlog4j.logger.org.eclipse.jetty.server=INFO\n\n# Analytics logger\nlog4j.logger.analytics=INFO,analytics\nlog4j.additivity.analytics=false', u'hst_max_file_size': u'30', u'hst_max_backup_index': u'10'}, u'spark2-thrift-fairscheduler': {u'fairscheduler_content': u'<?xml version="1.0"?>\n <allocations>\n <pool name="default">\n <schedulingMode>FAIR</schedulingMode>\n <weight>1</weight>\n <minShare>2</minShare>\n </pool>\n </allocations>'}, u'topology': {u'content': u'\n <topology>\n\n <gateway>\n\n <provider>\n <role>authentication</role>\n <name>ShiroProvider</name>\n <enabled>true</enabled>\n <param>\n <name>sessionTimeout</name>\n <value>30</value>\n </param>\n <param>\n <name>main.ldapRealm</name>\n <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>\n </param>\n <param>\n <name>main.ldapRealm.userDnTemplate</name>\n <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.url</name>\n <value>ldap://{{knox_host_name}}:33389</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.authenticationMechanism</name>\n <value>simple</value>\n </param>\n <param>\n <name>urls./**</name>\n <value>authcBasic</value>\n </param>\n </provider>\n\n <provider>\n <role>identity-assertion</role>\n <name>Default</name>\n <enabled>true</enabled>\n </provider>\n\n <provider>\n <role>authorization</role>\n <name>AclsAuthz</name>\n <enabled>true</enabled>\n </provider>\n\n </gateway>\n\n <service>\n <role>NAMENODE</role>\n <url>{{namenode_address}}</url>\n </service>\n\n <service>\n <role>JOBTRACKER</role>\n <url>rpc://{{rm_host}}:{{jt_rpc_port}}</url>\n </service>\n\n <service>\n <role>WEBHDFS</role>\n {{webhdfs_service_urls}}\n </service>\n\n <service>\n <role>WEBHCAT</role>\n <url>http://{{webhcat_server_host}}:{{templeton_port}}/templeton</url>\n </service>\n\n <service>\n <role>OOZIE</role>\n <url>http://{{oozie_server_host}}:{{oozie_server_port}}/oozie</url>\n </service>\n\n <service>\n <role>OOZIEUI</role>\n <url>http://{{oozie_server_host}}:{{oozie_server_port}}/oozie/</url>\n </service>\n\n\n <service>\n <role>WEBHBASE</role>\n <url>http://{{hbase_master_host}}:{{hbase_master_port}}</url>\n </service>\n\n <service>\n <role>HIVE</role>\n 
<url>http://{{hive_server_host}}:{{hive_http_port}}/{{hive_http_path}}</url>\n </service>\n\n <service>\n <role>RESOURCEMANAGER</role>\n <url>http://{{rm_host}}:{{rm_port}}/ws</url>\n </service>\n\n <service>\n <role>DRUID-COORDINATOR-UI</role>\n {{druid_coordinator_urls}}\n </service>\n\n <service>\n <role>DRUID-COORDINATOR</role>\n {{druid_coordinator_urls}}\n </service>\n\n <service>\n <role>DRUID-OVERLORD-UI</role>\n {{druid_overlord_urls}}\n </service>\n\n <service>\n <role>DRUID-OVERLORD</role>\n {{druid_overlord_urls}}\n </service>\n\n <service>\n <role>DRUID-ROUTER</role>\n {{druid_router_urls}}\n </service>\n\n <service>\n <role>DRUID-BROKER</role>\n {{druid_broker_urls}}\n </service>\n\n <service>\n <role>ZEPPELINUI</role>\n {{zeppelin_ui_urls}}\n </service>\n\n <service>\n <role>ZEPPELINWS</role>\n {{zeppelin_ws_urls}}\n </service>\n\n </topology>'}, u'viewfs-mount-table': {u'content': u' '}, u'yarn-hbase-env': {u'yarn_hbase_system_service_queue_name': u'default', u'yarn_hbase_client_cpu': u'1', u'yarn_hbase_regionserver_cpu': u'1', u'hbase_java_io_tmpdir': u'/tmp', u'yarn_hbase_master_memory': u'4096', u'yarn_hbase_master_cpu': u'1', u'is_hbase_system_service_launch': u'true', u'yarn_hbase_client_containers': u'1', u'content': u'\n # Set environment variables here.\n\n # The java implementation to use. Java 1.6 required.\n export JAVA_HOME={{java64_home}}\n\n # HBase Configuration directory\n export HBASE_CONF_DIR=${HBASE_CONF_DIR:-{{yarn_hbase_conf_dir}}}\n\n # Extra Java CLASSPATH elements. Optional.\n export HBASE_CLASSPATH=${HBASE_CLASSPATH}\n\n\n # The maximum amount of heap to use. Default is left to JVM default.\n # export HBASE_HEAPSIZE=4G\n\n # Extra Java runtime options.\n # Below are what we set by default. May only work with SUN JVM.\n # For more on why as well as other possible settings,\n # see http://wiki.apache.org/hadoop/PerformanceTuning\n export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{yarn_hbase_log_dir}}/gc.log-`date +\'%Y%m%d%H%M\'`"\n # Uncomment below to enable java garbage collection logging.\n # export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"\n\n # Uncomment and adjust to enable JMX exporting\n # See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.\n # More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html\n #\n # export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"\n # If you want to configure BucketCache, specify \'-XX: MaxDirectMemorySize=\' with proper direct memory size\n # export HBASE_THRIFT_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"\n # export HBASE_ZOOKEEPER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"\n\n # File naming hosts on which HRegionServers will run. $HBASE_HOME/conf/regionservers by default.\n export HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers\n\n # Extra ssh options. Empty by default.\n # export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"\n\n # Where log files are stored. $HBASE_HOME/logs by default.\n export HBASE_LOG_DIR=${HBASE_LOG_DIR:-{{yarn_hbase_log_dir}}}\n\n # A string representing this instance of hbase. $USER by default.\n # export HBASE_IDENT_STRING=$USER\n\n # The scheduling priority for daemon processes. 
See \'man nice\'.\n # export HBASE_NICENESS=10\n\n # The directory where pid files are stored. /tmp by default.\n export HBASE_PID_DIR=${HBASE_PID_DIR:-{{yarn_hbase_pid_dir}}}\n\n # Seconds to sleep between slave commands. Unset by default. This\n # can be useful in large clusters, where, e.g., slave rsyncs can\n # otherwise arrive faster than the master can service them.\n # export HBASE_SLAVE_SLEEP=0.1\n\n # Tell HBase whether it should manage it\'s own instance of Zookeeper or not.\n export HBASE_MANAGES_ZK=false\n\n {% if java_version < 8 %}\n JDK_DEPENDED_OPTS="-XX:PermSize=128m -XX:MaxPermSize=128m"\n {% endif %}\n\n export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:ErrorFile=$HBASE_LOG_DIR/hs_err_pid%p.log -Djava.io.tmpdir={{yarn_hbase_java_io_tmpdir}}"\n export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx{{yarn_hbase_master_heapsize}} $JDK_DEPENDED_OPTS"\n export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:CMSInitiatingOccupancyFraction=70 -XX:ReservedCodeCacheSize=256m -Xms{{yarn_hbase_regionserver_heapsize}} -Xmx{{yarn_hbase_regionserver_heapsize}} $JDK_DEPENDED_OPTS"\n\n {% if security_enabled %}\n export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Djava.security.auth.login.config={{yarn_hbase_master_jaas_file}}"\n export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Djava.security.auth.login.config={{yarn_hbase_regionserver_jaas_file}}"\n {% endif %}', u'yarn_hbase_master_containers': u'1', u'yarn_hbase_regionserver_memory': u'4096', u'yarn_hbase_regionserver_containers': u'1', u'yarn_hbase_client_memory': u'1536', u'yarn_hbase_heap_memory_factor': u'0.8', u'yarn_hbase_pid_dir_prefix': u'/var/run/hadoop-yarn-hbase', u'yarn_hbase_system_service_launch_mode': u'sync'}, u'hadoop-env': {u'proxyuser_group': u'users', u'hdfs_user_nproc_limit': u'65536', u'namenode_opt_permsize': u'128m', u'hdfs_tmp_dir': u'/tmp', u'namenode_heapsize': u'1024m', u'hdfs_user_keytab': u'', u'content': u'\n # Set Hadoop-specific environment variables here.\n\n # The only required environment variable is JAVA_HOME. All others are\n # optional. When running a distributed configuration it is best to\n # set JAVA_HOME in this file, so that it is correctly defined on\n # remote nodes.\n\n # The java implementation to use. Required.\n export JAVA_HOME={{java_home}}\n export HADOOP_HOME_WARN_SUPPRESS=1\n\n # Hadoop home directory\n export HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\n\n # Hadoop Configuration Directory\n\n {# this is different for HDP1 #}\n # Path to jsvc required by secure HDP 2.0 datanode\n export JSVC_HOME={{jsvc_path}}\n\n\n # The maximum amount of heap to use, in MB. Default is 1000.\n export HADOOP_HEAPSIZE="{{hadoop_heapsize}}"\n\n export HADOOP_NAMENODE_INIT_HEAPSIZE="-Xms{{namenode_heapsize}}"\n\n # Extra Java runtime options. 
Empty by default.\n export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true ${HADOOP_OPTS}"\n\n USER="$(whoami)"\n\n # Command specific options appended to HADOOP_OPTS when specified\n HADOOP_JOBTRACKER_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{jtnode_opt_newsize}} -XX:MaxNewSize={{jtnode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +\'%Y%m%d%H%M\'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xmx{{jtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dmapred.audit.logger=INFO,MRAUDIT -Dhadoop.mapreduce.jobsummary.logger=INFO,JSA ${HADOOP_JOBTRACKER_OPTS}"\n\n HADOOP_TASKTRACKER_OPTS="-server -Xmx{{ttnode_heapsize}} -Dhadoop.security.logger=ERROR,console -Dmapred.audit.logger=ERROR,console ${HADOOP_TASKTRACKER_OPTS}"\n\n {% if java_version < 8 %}\n SHARED_HDFS_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -XX:PermSize={{namenode_opt_permsize}} -XX:MaxPermSize={{namenode_opt_maxpermsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +\'%Y%m%d%H%M\'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT"\n export HDFS_NAMENODE_OPTS="${SHARED_HDFS_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HDFS_NAMENODE_OPTS}"\n export HDFS_DATANODE_OPTS="-server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-datanode/bin/kill-data-node\\" -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +\'%Y%m%d%H%M\'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HDFS_DATANODE_OPTS} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"\n\n export HDFS_SECONDARYNAMENODE_OPTS="${SHARED_HDFS_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node\\" ${HDFS_SECONDARYNAMENODE_OPTS}"\n\n # The following applies to multiple commands (fs, dfs, fsck, distcp etc)\n export HADOOP_CLIENT_OPTS="-Xmx${HADOOP_HEAPSIZE}m -XX:MaxPermSize=512m $HADOOP_CLIENT_OPTS"\n\n {% else %}\n SHARED_HDFS_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +\'%Y%m%d%H%M\'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT"\n export HDFS_NAMENODE_OPTS="${SHARED_HDFS_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" 
-Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HDFS_NAMENODE_OPTS}"\n export HDFS_DATANODE_OPTS="-server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-datanode/bin/kill-data-node\\" -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +\'%Y%m%d%H%M\'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HDFS_DATANODE_OPTS} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"\n\n export HDFS_SECONDARYNAMENODE_OPTS="${SHARED_HDFS_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node\\" ${HDFS_SECONDARYNAMENODE_OPTS}"\n\n # The following applies to multiple commands (fs, dfs, fsck, distcp etc)\n export HADOOP_CLIENT_OPTS="-Xmx${HADOOP_HEAPSIZE}m $HADOOP_CLIENT_OPTS"\n {% endif %}\n\n {% if security_enabled %}\n export HDFS_NAMENODE_OPTS="$HDFS_NAMENODE_OPTS -Djava.security.auth.login.config={{hadoop_conf_dir}}/hdfs_nn_jaas.conf -Djavax.security.auth.useSubjectCredsOnly=false"\n export HDFS_SECONDARYNAMENODE_OPTS="$HDFS_SECONDARYNAMENODE_OPTS -Djava.security.auth.login.config={{hadoop_conf_dir}}/hdfs_nn_jaas.conf -Djavax.security.auth.useSubjectCredsOnly=false"\n export HDFS_DATANODE_OPTS="$HDFS_DATANODE_OPTS -Djava.security.auth.login.config={{hadoop_conf_dir}}/hdfs_dn_jaas.conf -Djavax.security.auth.useSubjectCredsOnly=false"\n export HADOOP_JOURNALNODE_OPTS="$HADOOP_JOURNALNODE_OPTS -Djava.security.auth.login.config={{hadoop_conf_dir}}/hdfs_jn_jaas.conf -Djavax.security.auth.useSubjectCredsOnly=false"\n {% endif %}\n\n HDFS_NFS3_OPTS="-Xmx{{nfsgateway_heapsize}}m -Dhadoop.security.logger=ERROR,DRFAS ${HDFS_NFS3_OPTS}"\n HADOOP_BALANCER_OPTS="-server -Xmx{{hadoop_heapsize}}m ${HADOOP_BALANCER_OPTS}"\n\n\n # On secure datanodes, user to run the datanode as after dropping privileges\n export HDFS_DATANODE_SECURE_USER=${HDFS_DATANODE_SECURE_USER:-{{hadoop_secure_dn_user}}}\n\n # Extra ssh options. Empty by default.\n export HADOOP_SSH_OPTS="-o ConnectTimeout=5 -o SendEnv=HADOOP_CONF_DIR"\n\n # Where log files are stored. $HADOOP_HOME/logs by default.\n export HADOOP_LOG_DIR={{hdfs_log_dir_prefix}}/$USER\n\n # Where log files are stored in the secure data environment.\n export HADOOP_SECURE_LOG_DIR=${HADOOP_SECURE_LOG_DIR:-{{hdfs_log_dir_prefix}}/$HDFS_DATANODE_SECURE_USER}\n\n # File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.\n # export HADOOP_WORKERS=${HADOOP_HOME}/conf/slaves\n\n # host:path where hadoop code should be rsync\'d from. Unset by default.\n # export HADOOP_MASTER=master:/home/$USER/src/hadoop\n\n # Seconds to sleep between slave commands. Unset by default. This\n # can be useful in large clusters, where, e.g., slave rsyncs can\n # otherwise arrive faster than the master can service them.\n # export HADOOP_WORKER_SLEEP=0.1\n\n # The directory where pid files are stored. /tmp by default.\n export HADOOP_PID_DIR={{hadoop_pid_dir_prefix}}/$USER\n export HADOOP_SECURE_PID_DIR=${HADOOP_SECURE_PID_DIR:-{{hadoop_pid_dir_prefix}}/$HDFS_DATANODE_SECURE_USER}\n\n YARN_RESOURCEMANAGER_OPTS="-Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY"\n\n # A string representing this instance of hadoop. 
$USER by default.\n export HADOOP_IDENT_STRING=$USER\n\n # The scheduling priority for daemon processes. See \'man nice\'.\n\n # export HADOOP_NICENESS=10\n\n # Add database libraries\n JAVA_JDBC_LIBS=""\n if [ -d "/usr/share/java" ]; then\n for jarFile in `ls /usr/share/java | grep -E "(mysql|ojdbc|postgresql|sqljdbc)" 2>/dev/null`\n do\n JAVA_JDBC_LIBS=${JAVA_JDBC_LIBS}:$jarFile\n done\n fi\n\n # Add libraries to the hadoop classpath - some may not need a colon as they already include it\n export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}${JAVA_JDBC_LIBS}\n\n # Setting path to hdfs command line\n export HADOOP_LIBEXEC_DIR={{hadoop_libexec_dir}}\n\n # Mostly required for hadoop 2.0\n export JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}:{{hadoop_lib_home}}/native/Linux-{{architecture}}-64\n\n export HADOOP_OPTS="-Dhdp.version=$HDP_VERSION $HADOOP_OPTS"\n\n\n # Fix temporary bug, when ulimit from conf files is not picked up, without full relogin.\n # Makes sense to fix only when runing DN as root\n if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n "$HDFS_DATANODE_SECURE_USER" ]; then\n {% if is_datanode_max_locked_memory_set %}\n ulimit -l {{datanode_max_locked_memory}}\n {% endif %}\n ulimit -n {{hdfs_user_nofile_limit}}\n fi\n # Enable ACLs on zookeper znodes if required\n {% if hadoop_zkfc_opts is defined %}\n export HDFS_ZKFC_OPTS="{{hadoop_zkfc_opts}} $HDFS_ZKFC_OPTS"\n {% endif %}', u'hdfs_user_nofile_limit': u'128000', u'keyserver_port': u'', u'hadoop_root_logger': u'INFO,RFA', u'namenode_opt_maxnewsize': u'128m', u'hdfs_log_dir_prefix': u'/var/log/hadoop', u'keyserver_host': u' ', u'nfsgateway_heapsize': u'1024', u'dtnode_heapsize': u'1024m', u'namenode_opt_maxpermsize': u'256m', u'hdfs_user': u'hdfs', u'namenode_opt_newsize': u'128m', u'namenode_backup_dir': u'/tmp/upgrades', u'hadoop_heapsize': u'1024', u'hadoop_pid_dir_prefix': u'/var/run/hadoop', u'hdfs_principal_name': u''}, u'tez-interactive-site': {u'tez.dag.recovery.enabled': u'false', u'tez.runtime.io.sort.mb': u'1092', u'tez.runtime.shuffle.fetch.buffer.percent': u'0.6', u'tez.history.logging.log.level': u'TASK_ATTEMPT', u'tez.history.logging.timeline.num-dags-per-group': u'5', u'tez.runtime.unordered.output.buffer.size-mb': u'245', u'tez.runtime.shuffle.read.timeout': u'30000', u'tez.lib.uris': u'/hdp/apps/${hdp.version}/tez/tez.tar.gz', u'tez.grouping.node.local.only': u'true', u'tez.container.max.java.heap.fraction': u'-1', u'tez.am.client.heartbeat.poll.interval.millis': u'6000', u'tez.am.client.heartbeat.timeout.secs': u'90', u'tez.history.logging.taskattempt-filters': u'SERVICE_BUSY,EXTERNAL_PREEMPTION', u'tez.runtime.pipelined-shuffle.enabled': u'false', u'tez.runtime.shuffle.keep-alive.enabled': u'true', u'tez.am.node-blacklisting.enabled': u'false', u'tez.am.task.reschedule.higher.priority': u'false', u'tez.task.heartbeat.timeout.check-ms': u'15000', u'tez.runtime.shuffle.ssl.enable': u'false', u'tez.runtime.shuffle.fetch.verify-disk-checksum': u'false', u'tez.session.am.dag.submit.timeout.secs': u'1209600', u'tez.runtime.enable.final-merge.in.output': u'false', u'tez.am.am-rm.heartbeat.interval-ms.max': u'10000', u'tez.runtime.pipelined.sorter.lazy-allocate.memory': u'true', u'tez.runtime.shuffle.memory.limit.percent': u'0.25', u'tez.runtime.shuffle.connect.timeout': u'30000', u'tez.runtime.report.partition.stats': u'true', u'tez.am.task.listener.thread-count': u'1', u'tez.am.resource.memory.mb': u'4096', u'tez.runtime.shuffle.parallel.copies': u'8', u'tez.task.timeout-ms': u'90000'}, u'ranger-knox-security': {}, 
u'parquet-logging': {u'content': u'\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Properties file which configures the operation of the JDK\n# logging facility.\n\n# The system will look for this config file, first using\n# a System property specified at startup:\n#\n# >java -Djava.util.logging.config.file=myLoggingConfigFilePath\n#\n# If this property is not specified, then the config file is\n# retrieved from its default location at:\n#\n# JDK_HOME/jre/lib/logging.properties\n\n# Global logging properties.\n# ------------------------------------------\n# The set of handlers to be loaded upon startup.\n# Comma-separated list of class names.\n# (? LogManager docs say no comma here, but JDK example has comma.)\n# handlers=java.util.logging.ConsoleHandler\norg.apache.parquet.handlers= java.util.logging.FileHandler\n\n# Default global logging level.\n# Loggers and Handlers may override this level\n.level=INFO\n\n# Handlers\n# -----------------------------------------\n\n# --- ConsoleHandler ---\n# Override of global logging level\njava.util.logging.ConsoleHandler.level=INFO\njava.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter\njava.util.logging.SimpleFormatter.format=[%1$tc] %4$s: %2$s - %5$s %6$s%n\n\n# --- FileHandler ---\n# Override of global logging level\njava.util.logging.FileHandler.level=ALL\n\n# Naming style for the output file:\n# (The output file is placed in the system temporary directory.\n# %u is used to provide unique identifier for the file.\n# For more information refer\n# https://docs.oracle.com/javase/7/docs/api/java/util/logging/FileHandler.html)\njava.util.logging.Fil... Limiting size of output file in bytes:\njava.util.logging.FileHandler.limit=50000000\n\n# Number of output files to cycle through, by appending an\n# integer to the base file name:\njava.util.logging.FileHandler.count=1\n\n# Style of output (Simple or XML):\njava.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter'}, u'yarn-hbase-log4j': {u'content': u'\n # Licensed to the Apache Software Foundation (ASF) under one\n # or more contributor license agreements. See the NOTICE file\n # distributed with this work for additional information\n # regarding copyright ownership. The ASF licenses this file\n # to you under the Apache License, Version 2.0 (the\n # "License"); you may not use this file except in compliance\n # with the License. 
You may obtain a copy of the License at\n #\n # http://www.apache.org/licenses/LICENSE-2.0\n #\n # Unless required by applicable law or agreed to in writing, software\n # distributed under the License is distributed on an "AS IS" BASIS,\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n\n\n # Define some default values that can be overridden by system properties\n hbase.root.logger=INFO,console\n hbase.security.logger=INFO,console\n hbase.log.dir=.\n hbase.log.file=hbase.log\n\n # Define the root logger to the system property "hbase.root.logger".\n log4j.rootLogger=${hbase.root.logger}\n\n # Logging Threshold\n log4j.threshold=ALL\n\n #\n # Daily Rolling File Appender\n #\n log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender\n log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}\n\n # Rollver at midnight\n log4j.appender.DRFA.DatePattern=.yyyy-MM-dd\n\n # 30-day backup\n #log4j.appender.DRFA.MaxBackupIndex=30\n log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout\n\n # Pattern format: Date LogLevel LoggerName LogMessage\n log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n # Rolling File Appender properties\n hbase.log.maxfilesize={{hbase_log_maxfilesize}}MB\n hbase.log.maxbackupindex={{hbase_log_maxbackupindex}}\n\n # Rolling File Appender\n log4j.appender.RFA=org.apache.log4j.RollingFileAppender\n log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}\n\n log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}\n log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}\n\n log4j.appender.RFA.layout=org.apache.log4j.PatternLayout\n log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n #\n # Security audit appender\n #\n hbase.security.log.file=SecurityAuth.audit\n hbase.security.log.maxfilesize={{hbase_security_log_maxfilesize}}MB\n hbase.security.log.maxbackupindex={{hbase_security_log_maxbackupindex}}\n log4j.appender.RFAS=org.apache.log4j.RollingFileAppender\n log4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}\n log4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}\n log4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}\n log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout\n log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\n log4j.category.SecurityLogger=${hbase.security.logger}\n log4j.additivity.SecurityLogger=false\n #log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE\n\n #\n # Null Appender\n #\n log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender\n\n #\n # console\n # Add "console" to rootlogger above if you want to use this\n #\n log4j.appender.console=org.apache.log4j.ConsoleAppender\n log4j.appender.console.target=System.err\n log4j.appender.console.layout=org.apache.log4j.PatternLayout\n log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n # Custom Logging levels\n\n log4j.logger.org.apache.zookeeper=INFO\n #log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG\n log4j.logger.org.apache.hadoop.hbase=INFO\n # Make these two classes INFO-level. 
Make them DEBUG to see more zk debug.\n log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO\n log4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO\n #log4j.logger.org.apache.hadoop.dfs=DEBUG\n # Set this class to log INFO only otherwise its OTT\n # Enable this to get detailed connection error/retry logging.\n # log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=TRACE\n\n\n # Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output)\n #log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG\n\n # Uncomment the below if you want to remove logging of client region caching\'\n # and scan of .META. messages\n # log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO\n # log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO', u'hbase_log_maxbackupindex': u'20', u'hbase_log_maxfilesize': u'256', u'hbase_security_log_maxbackupindex': u'20', u'hbase_security_log_maxfilesize': u'256'}, u'spark2-hive-site-override': {u'hive.server2.thrift.port': u'10016', u'hive.server2.transport.mode': u'binary', u'hive.metastore.client.connect.retry.delay': u'5', u'hive.metastore.client.socket.timeout': u'1800', u'hive.server2.thrift.http.port': u'10002', u'metastore.catalog.default': u'spark', u'hive.server2.enable.doAs': u'false'}, u'activity-zeppelin-interpreter': {u'activity-zeppelin-interpreter-content': u'{\n "interpreterSettings": {\n "jdbc": {\n "id": "jdbc",\n "name": "jdbc",\n "group": "jdbc",\n "properties": {\n "default.url": {\n "name": "default.url",\n "value": "jdbc:postgresql://localhost:5432/",\n "type": "string"\n },\n "default.driver": {\n "name": "default.driver",\n "value": "org.postgresql.Driver",\n "type": "string"\n },\n "zeppelin.jdbc.principal": {\n "name": "zeppelin.jdbc.principal",\n "value": "",\n "type": "string"\n },\n "default.completer.ttlInSeconds": {\n "name": "default.completer.ttlInSeconds",\n "value": "120",\n "type": "number"\n },\n "default.password": {\n "name": "default.password",\n "value": "",\n "type": "password"\n },\n "default.completer.schemaFilters": {\n "name": "default.completer.schemaFilters",\n "value": "",\n "type": "textarea"\n },\n "default.splitQueries": {\n "name": "default.splitQueries",\n "value": false,\n "type": "checkbox"\n },\n "default.user": {\n "name": "default.user",\n "value": "gpadmin",\n "type": "string"\n },\n "zeppelin.jdbc.concurrent.max_connection": {\n "name": "zeppelin.jdbc.concurrent.max_connection",\n "value": "10",\n "type": "number"\n },\n "common.max_count": {\n "name": "common.max_count",\n "value": "1000",\n "type": "number"\n },\n "default.precode": {\n "name": "default.precode",\n "value": "",\n "type": "textarea"\n },\n "zeppelin.jdbc.auth.type": {\n "name": "zeppelin.jdbc.auth.type",\n "value": "",\n "type": "string"\n },\n "default.statementPrecode": {\n "name": "default.statementPrecode",\n "value": "",\n "type": "string"\n },\n "zeppelin.jdbc.concurrent.use": {\n "name": "zeppelin.jdbc.concurrent.use",\n "value": true,\n "type": "checkbox"\n },\n "zeppelin.jdbc.keytab.location": {\n "name": "zeppelin.jdbc.keytab.location",\n "value": "",\n "type": "string"\n }\n },\n "status": "READY",\n "interpreterGroup": [\n {\n "name": "sql",\n "class": "org.apache.zeppelin.jdbc.JDBCInterpreter",\n "defaultInterpreter": false,\n "editor": {\n "language": "sql",\n "editOnDblClick": false,\n "completionSupport": true\n }\n }\n ],\n "dependencies": [],\n "option": {\n "remote": true,\n "port": -1,\n 
"perNote": "shared",\n "perUser": "shared",\n "isExistingProcess": false,\n "setPermission": false,\n "owners": [],\n "isUserImpersonate": false\n }\n },\n "phoenix": {\n "id": "phoenix",\n "name": "phoenix",\n "group": "jdbc",\n "properties": {\n "default.url": {\n "name": "default.url",\n "value": "{{activity_explorer_jdbc_url}}",\n "type": "string"\n },\n "default.driver": {\n "name": "default.driver",\n "value": "org.apache.phoenix.jdbc.PhoenixDriver",\n "type": "string"\n },\n "zeppelin.jdbc.principal": {\n "name": "zeppelin.jdbc.principal",\n "value": "",\n "type": "string"\n },\n "default.completer.ttlInSeconds": {\n "name": "default.completer.ttlInSeconds",\n "value": "120",\n "type": "number"\n },\n "default.password": {\n "name": "default.password",\n "value": "",\n "type": "password"\n },\n "default.completer.schemaFilters": {\n "name": "default.completer.schemaFilters",\n "value": "",\n "type": "textarea"\n },\n "default.splitQueries": {\n "name": "default.splitQueries",\n "value": false,\n "type": "checkbox"\n },\n "default.user": {\n "name": "default.user",\n "value": "gpadmin",\n "type": "string"\n },\n "zeppelin.jdbc.concurrent.max_connection": {\n "name": "zeppelin.jdbc.concurrent.max_connection",\n "value": "10",\n "type": "number"\n },\n "common.max_count": {\n "name": "common.max_count",\n "value": "1000",\n "type": "number"\n },\n "default.precode": {\n "name": "default.precode",\n "value": "",\n "type": "textarea"\n },\n "zeppelin.jdbc.auth.type": {\n "name": "zeppelin.jdbc.auth.type",\n "value": "",\n "type": "string"\n },\n "default.statementPrecode": {\n "name": "default.statementPrecode",\n "value": ""\n },\n "zeppelin.jdbc.concurrent.use": {\n "name": "zeppelin.jdbc.concurrent.use",\n "value": true,\n "type": "checkbox"\n },\n "zeppelin.jdbc.keytab.location": {\n "name": "zeppelin.jdbc.keytab.location",\n "value": "",\n "type": "string"\n },\n "default.phoenix.query.numberFormat": {\n "name": "default.phoenix.query.numberFormat",\n "value": "#.#",\n "type": "string"\n }\n },\n "status": "READY",\n "interpreterGroup": [\n {\n "name": "sql",\n "class": "org.apache.zeppelin.jdbc.JDBCInterpreter",\n "defaultInterpreter": true,\n "editor": {\n "language": "sql",\n "editOnDblClick": false,\n "completionSupport": true\n }\n }\n ],\n "dependencies": [],\n "option": {\n "remote": true,\n "port": -1,\n "perNote": "shared",\n "perUser": "shared",\n "isExistingProcess": false,\n "setPermission": false,\n "owners": [],\n "isUserImpersonate": false\n }\n },\n "md": {\n "id": "md",\n "name": "md",\n "group": "md",\n "properties": {\n "markdown.parser.type": {\n "name": "markdown.parser.type",\n "value": "pegdown",\n "type": "string"\n }\n },\n "status": "READY",\n "interpreterGroup": [\n {\n "name": "md",\n "class": "org.apache.zeppelin.markdown.Markdown",\n "defaultInterpreter": false,\n "editor": {\n "language": "markdown",\n "editOnDblClick": true,\n "completionSupport": false\n }\n }\n ],\n "dependencies": [],\n "option": {\n "remote": true,\n "port": -1,\n "isExistingProcess": false,\n "setPermission": false,\n "owners": [],\n "isUserImpersonate": false\n }\n }\n },\n "interpreterBindings": {\n "2DGK3YSSY": [\n "phoenix"\n ],\n "2BQH91X36": [\n "phoenix"\n ],\n "2BNVQJUBK": [\n "phoenix"\n ],\n "2BPD7951H": [\n "phoenix"\n ],\n "2DGNFSF2D": [\n "phoenix"\n ],\n "2DGCYZ7F3": [\n "phoenix"\n ],\n "2BTCVPTMH": [\n "phoenix"\n ]\n },\n "interpreterRepositories": [\n {\n "id": "central",\n "type": "default",\n "url": "http://repo1.maven.org/maven2/",\n "releasePolicy": {\n 
"enabled": true,\n "updatePolicy": "daily",\n "checksumPolicy": "warn"\n },\n "snapshotPolicy": {\n "enabled": true,\n "updatePolicy": "daily",\n "checksumPolicy": "warn"\n },\n "mirroredRepositories": [],\n "repositoryManager": false\n },\n {\n "id": "HDPReleases",\n "type": "default",\n "url": "http://repo.hortonworks.com/content/groups/public/",\n "releasePolicy": {\n "enabled": true,\n "updatePolicy": "daily",\n "checksumPolicy": "warn"\n },\n "snapshotPolicy": {\n "enabled": true,\n "updatePolicy": "daily",\n "checksumPolicy": "warn"\n },\n "mirroredRepositories": [],\n "repositoryManager": false\n },\n {\n "id": "HDPDev",\n "type": "default",\n "url": "http://nexus-private.hortonworks.com/nexus/content/groups/public/",\n "releasePolicy": {\n "enabled": true,\n "updatePolicy": "daily",\n "checksumPolicy": "warn"\n },\n "snapshotPolicy": {\n "enabled": true,\n "updatePolicy": "daily",\n "checksumPolicy": "warn"\n },\n "mirroredRepositories": [],\n "repositoryManager": false\n },\n {\n "id": "local",\n "type": "default",\n "url": "file:///var/lib/smartsense/activity-explorer/.m2/repository",\n "releasePolicy": {\n "enabled": true,\n "updatePolicy": "daily",\n "checksumPolicy": "warn"\n },\n "snapshotPolicy": {\n "enabled": true,\n "updatePolicy": "daily",\n "checksumPolicy": "warn"\n },\n "mirroredRepositories": [],\n "repositoryManager": false\n }\n ]\n}'}, u'users-ldif': {u'content': u'\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nversion: 1\n\n# Please replace with site specific values\ndn: dc=hadoop,dc=apache,dc=org\nobjectclass: organization\nobjectclass: dcObject\no: Hadoop\ndc: hadoop\n\n# Entry for a sample people container\n# Please replace with site specific values\ndn: ou=people,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:organizationalUnit\nou: people\n\n# Entry for a sample end user\n# Please replace with site specific values\ndn: uid=guest,ou=people,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:person\nobjectclass:organizationalPerson\nobjectclass:inetOrgPerson\ncn: Guest\nsn: User\nuid: guest\nuserPassword:guest-password\n\n# entry for sample user admin\ndn: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:person\nobjectclass:organizationalPerson\nobjectclass:inetOrgPerson\ncn: Admin\nsn: Admin\nuid: admin\nuserPassword:admin-password\n\n# entry for sample user sam\ndn: uid=sam,ou=people,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:person\nobjectclass:organizationalPerson\nobjectclass:inetOrgPerson\ncn: sam\nsn: sam\nuid: sam\nuserPassword:sam-password\n\n# entry for sample user tom\ndn: uid=tom,ou=people,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:person\nobjectclass:organizationalPerson\nobjectclass:inetOrgPerson\ncn: tom\nsn: tom\nuid: tom\nuserPassword:tom-password\n\n# create FIRST Level groups branch\ndn: ou=groups,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:organizationalUnit\nou: groups\ndescription: generic groups branch\n\n# create the analyst group under groups\ndn: cn=analyst,ou=groups,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass: groupofnames\ncn: analyst\ndescription:analyst group\nmember: uid=sam,ou=people,dc=hadoop,dc=apache,dc=org\nmember: uid=tom,ou=people,dc=hadoop,dc=apache,dc=org\n\n\n# create the scientist group under groups\ndn: cn=scientist,ou=groups,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass: groupofnames\ncn: scientist\ndescription: scientist group\nmember: uid=sam,ou=people,dc=hadoop,dc=apache,dc=org\n\n# create the admin group under groups\ndn: cn=admin,ou=groups,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass: groupofnames\ncn: admin\ndescription: admin group\nmember: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org'}, u'gateway-log4j': {u'content': u'\n\n # Licensed to the Apache Software Foundation (ASF) under one\n # or more contributor license agreements. See the NOTICE file\n # distributed with this work for additional information\n # regarding copyright ownership. The ASF licenses this file\n # to you under the Apache License, Version 2.0 (the\n # "License"); you may not use this file except in compliance\n # with the License. 
You may obtain a copy of the License at\n #\n # http://www.apache.org/licenses/LICENSE-2.0\n #\n # Unless required by applicable law or agreed to in writing, software\n # distributed under the License is distributed on an "AS IS" BASIS,\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n\n app.log.dir=${launcher.dir}/../logs\n app.log.file=${launcher.name}.log\n app.audit.file=${launcher.name}-audit.log\n\n log4j.rootLogger=ERROR, drfa\n\n log4j.logger.org.apache.knox.gateway=INFO\n #log4j.logger.org.apache.knox.gateway=DEBUG\n\n #log4j.logger.org.eclipse.jetty=DEBUG\n #log4j.logger.org.apache.shiro=DEBUG\n #log4j.logger.org.apache.http=DEBUG\n #log4j.logger.org.apache.http.client=DEBUG\n #log4j.logger.org.apache.http.headers=DEBUG\n #log4j.logger.org.apache.http.wire=DEBUG\n\n log4j.appender.stdout=org.apache.log4j.ConsoleAppender\n log4j.appender.stdout.layout=org.apache.log4j.PatternLayout\n log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\n\n log4j.appender.drfa=org.apache.log4j.DailyRollingFileAppender\n log4j.appender.drfa.File=${app.log.dir}/${app.log.file}\n log4j.appender.drfa.DatePattern=.yyyy-MM-dd\n log4j.appender.drfa.layout=org.apache.log4j.PatternLayout\n log4j.appender.drfa.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n\n log4j.appender.drfa.MaxFileSize = {{knox_gateway_log_maxfilesize}}MB\n log4j.appender.drfa.MaxBackupIndex = {{knox_gateway_log_maxbackupindex}}\n\n log4j.logger.audit=INFO, auditfile\n log4j.appender.auditfile=org.apache.log4j.DailyRollingFileAppender\n log4j.appender.auditfile.File=${app.log.dir}/${app.audit.file}\n log4j.appender.auditfile.Append = true\n log4j.appender.auditfile.DatePattern = \'.\'yyyy-MM-dd\n log4j.appender.auditfile.layout = org.apache.hadoop.gateway.audit.log4j.layout.AuditLayout', u'knox_gateway_log_maxfilesize': u'256', u'knox_gateway_log_maxbackupindex': u'20'}, u'activity-zeppelin-site': {u'zeppelin.server.port': u'9060', u'zeppelin.ssl.truststore.password': u'admin', u'zeppelin.interpreters': u'org.apache.zeppelin.phoenix.PhoenixInterpreter', u'zeppelin.server.context.path': u'/', u'zeppelin.war.tempdir': u'/var/lib/smartsense/activity-explorer/webapp', u'zeppelin.ssl.truststore.path': u'/var/lib/smartsense/activity-explorer/truststore', u'zeppelin.notebook.dir': u'/var/lib/smartsense/activity-explorer/notebook', u'zeppelin.ssl.keystore.password': u'admin', u'zeppelin.ssl.keystore.path': u'/var/lib/smartsense/activity-explorer/keystore', u'zeppelin.server.addr': u'0.0.0.0', u'zeppelin.notebook.cron.enable': u'true', u'zeppelin.ssl.client.auth': u'false', u'zeppelin.interpreter.dir': u'/usr/hdp/share/hst/activity-explorer/interpreter', u'zeppelin.ssl.keystore.type': u'JKS', u'zeppelin.ssl.key.manager.password': u'admin', u'zeppelin.anonymous.allowed': u'false', u'zeppelin.ssl.truststore.type': u'JKS', u'zeppelin.ssl': u'false', u'zeppelin.notebook.storage': u'org.apache.zeppelin.notebook.repo.VFSNotebookRepo', u'zeppelin.websocket.max.text.message.size': u'1024000', u'zeppelin.interpreter.connect.timeout': u'30000', u'zeppelin.notebook.homescreen.hide': u'false', u'zeppelin.server.allowed.origins': u'*'}, u'spark2-defaults': {u'spark.shuffle.file.buffer': u'1m', u'spark.yarn.historyServer.address': u'{{spark_history_server_host}}:{{spark_history_ui_port}}', u'spark.driver.extraLibraryPath': u'{{spark_hadoop_lib_native}}', 
u'spark.executor.extraJavaOptions': u'-XX:+UseNUMA', u'spark.master': u'yarn', u'spark.sql.autoBroadcastJoinThreshold': u'26214400', u'spark.eventLog.dir': u'hdfs:///spark2-history/', u'spark.history.kerberos.keytab': u'none', u'spark.sql.hive.metastore.jars': u'/usr/hdp/current/spark2-client/standalone-metastore/*', u'spark.acls.enable': u'true', u'spark.shuffle.unsafe.file.output.buffer': u'5m', u'spark.io.compression.lz4.blockSize': u'128kb', u'spark.eventLog.enabled': u'true', u'spark.history.ui.admin.acls': u'', u'spark.executor.extraLibraryPath': u'{{spark_hadoop_lib_native}}', u'spark.shuffle.io.serverThreads': u'128', u'spark.history.fs.logDirectory': u'hdfs:///spark2-history/', u'spark.history.fs.cleaner.maxAge': u'90d', u'spark.history.fs.cleaner.enabled': u'true', u'spark.history.kerberos.principal': u'none', u'spark.sql.orc.impl': u'native', u'spark.yarn.queue': u'default', u'spark.history.ui.acls.enable': u'true', u'spark.sql.statistics.fallBackToHdfs': u'true', u'spark.history.provider': u'org.apache.spark.deploy.history.FsHistoryProvider', u'spark.history.ui.port': u'18081', u'spark.admin.acls': u'', u'spark.unsafe.sorter.spill.reader.buffer.size': u'1m', u'spark.sql.hive.convertMetastoreOrc': u'true', u'spark.sql.orc.filterPushdown': u'true', u'spark.sql.hive.metastore.version': u'3.0', u'spark.shuffle.io.backLog': u'8192', u'spark.history.fs.cleaner.interval': u'7d', u'spark.sql.warehouse.dir': u'/apps/spark/warehouse'}, u'hive-log4j2': {u'content': u'\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nstatus = INFO\nname = HiveLog4j2\npackages = org.apache.hadoop.hive.ql.log\n\n# list of properties\nproperty.hive.log.level = {{hive_log_level}}\nproperty.hive.root.logger = DRFA\nproperty.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}\nproperty.hive.log.file = hive.log\n\n# list of all appenders\nappenders = console, DRFA\n\n# console appender\nappender.console.type = Console\nappender.console.name = console\nappender.console.target = SYSTEM_ERR\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} [%t]: %p %c{2}: %m%n\n\n# daily rolling file appender\nappender.DRFA.type = RollingFile\nappender.DRFA.name = DRFA\nappender.DRFA.fileName = ${sys:hive.log.dir}/${sys:hive.log.file}\n# Use %pid in the filePattern to append process-id@host-name to the filename if you want separate log files for different CLI session\nappender.DRFA.filePattern = ${sys:hive.log.dir}/${sys:hive.log.file}.%d{yyyy-MM-dd}_%i.gz\nappender.DRFA.layout.type = PatternLayout\nappender.DRFA.layout.pattern = %d{ISO8601} %-5p [%t]: %c{2} (%F:%M(%L)) - %m%n\nappender.DRFA.policies.type = Policies\nappender.DRFA.policies.time.type = TimeBasedTriggeringPolicy\nappender.DRFA.policies.time.interval = 1\nappender.DRFA.policies.time.modulate = true\nappender.DRFA.strategy.type = DefaultRolloverStrategy\nappender.DRFA.strategy.max = {{hive2_log_maxbackupindex}}\nappender.DRFA.policies.fsize.type = SizeBasedTriggeringPolicy\nappender.DRFA.policies.fsize.size = {{hive2_log_maxfilesize}}MB\n\n# list of all loggers\nloggers = NIOServerCnxn, ClientCnxnSocketNIO, DataNucleus, Datastore, JPOX\n\nlogger.NIOServerCnxn.name = org.apache.zookeeper.server.NIOServerCnxn\nlogger.NIOServerCnxn.level = WARN\n\nlogger.ClientCnxnSocketNIO.name = org.apache.zookeeper.ClientCnxnSocketNIO\nlogger.ClientCnxnSocketNIO.level = WARN\n\nlogger.DataNucleus.name = DataNucleus\nlogger.DataNucleus.level = ERROR\n\nlogger.Datastore.name = Datastore\nlogger.Datastore.level = ERROR\n\nlogger.JPOX.name = JPOX\nlogger.JPOX.level = ERROR\n\n# root logger\nrootLogger.level = ${sys:hive.log.level}\nrootLogger.appenderRefs = root\nrootLogger.appenderRef.root.ref = ${sys:hive.root.logger}', u'hive2_log_maxfilesize': u'256', u'hive2_log_maxbackupindex': u'30'}, u'zeppelin-log4j-properties': {u'log4j_properties_content': u'\nlog4j.rootLogger = INFO, dailyfile\nlog4j.appender.stdout = org.apache.log4j.ConsoleAppender\nlog4j.appender.stdout.layout = org.apache.log4j.PatternLayout\nlog4j.appender.stdout.layout.ConversionPattern=%5p [%d{ISO8601}] ({%t} %F[%M]:%L) - %m%n\nlog4j.appender.dailyfile.DatePattern=.yyyy-MM-dd\nlog4j.appender.dailyfile.Threshold = INFO\nlog4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.dailyfile.File = ${zeppelin.log.file}\nlog4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout\nlog4j.appender.dailyfile.layout.ConversionPattern=%5p [%d{ISO8601}] ({%t} %F[%M]:%L) - %m%n'}, u'activity-zeppelin-shiro': {u'main.securityManager.sessionManager.globalSessionTimeout': u'86400000', u'main.credentialMatcher': 
u'org.apache.shiro.authc.credential.PasswordMatcher', u'users.admin': u'F2PQp5buaYjkyr3HvTF6', u'main.securityManager.sessionManager': u'$sessionManager', u'main.iniRealm.credentialsMatcher': u'$credentialMatcher', u'main.sessionManager': u'org.apache.shiro.web.session.mgt.DefaultWebSessionManager'}, u'ams-ssl-server': {u'ssl.server.keystore.location': u'/etc/security/serverKeys/keystore.jks', u'ssl.server.keystore.keypassword': u'bigdata', u'ssl.server.truststore.location': u'/etc/security/serverKeys/all.jks', u'ssl.server.keystore.password': u'bigdata', u'ssl.server.truststore.password': u'bigdata', u'ssl.server.truststore.type': u'jks', u'ssl.server.keystore.type': u'jks', u'ssl.server.truststore.reload.interval': u'10000'}, u'tez-site': {u'tez.history.logging.proto-base-dir': u'/warehouse/tablespace/external/hive/sys.db', u'tez.task.max-events-per-heartbeat': u'500', u'tez.task.launch.cmd-opts': u'-XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB{{heap_dump_opts}}', u'tez.runtime.compress': u'true', u'tez.runtime.io.sort.mb': u'7299', u'tez.runtime.shuffle.fetch.buffer.percent': u'0.6', u'tez.runtime.convert.user-payload.to.history-text': u'false', u'tez.generate.debug.artifacts': u'false', u'tez.am.tez-ui.history-url.template': u'__HISTORY_URL_BASE__?viewPath=%2F%23%2Ftez-app%2F__APPLICATION_ID__', u'tez.am.view-acls': u'*', u'tez.am.log.level': u'INFO', u'tez.counters.max.groups': u'3000', u'tez.task.get-task.sleep.interval-ms.max': u'200', u'tez.counters.max': u'10000', u'tez.shuffle-vertex-manager.max-src-fraction': u'0.4', u'tez.runtime.unordered.output.buffer.size-mb': u'2073', u'tez.queue.name': u'default', u'tez.task.resource.memory.mb': u'27648', u'tez.history.logging.service.class': u'org.apache.tez.dag.history.logging.proto.ProtoHistoryLoggingService', u'tez.runtime.optimize.local.fetch': u'true', u'tez.am.launch.cmd-opts': u'-XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB{{heap_dump_opts}}', u'tez.task.am.heartbeat.counter.interval-ms.max': u'4000', u'tez.am.max.app.attempts': u'2', u'yarn.timeline-service.enabled': u'false', u'tez.am.launch.env': u'LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-{{architecture}}-64', u'tez.am.container.idle.release-timeout-max.millis': u'20000', u'tez.use.cluster.hadoop-libs': u'false', u'tez.history.logging.timeline-cache-plugin.old-num-dags-per-group': u'5', u'tez.am.launch.cluster-default.cmd-opts': u'-server -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}', u'tez.am.container.idle.release-timeout-min.millis': u'10000', u'tez.am.java.opts': u'-server -Xmx22118m -Djava.net.preferIPv4Stack=true', u'tez.runtime.sorter.class': u'PIPELINED', u'tez.runtime.compress.codec': u'org.apache.hadoop.io.compress.SnappyCodec', u'tez.task.launch.cluster-default.cmd-opts': u'-server -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}', u'tez.task.launch.env': u'LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-{{architecture}}-64', u'tez.am.container.reuse.enabled': u'true', u'tez.session.am.dag.submit.timeout.secs': u'600', u'tez.grouping.min-size': u'16777216', u'tez.grouping.max-size': u'1073741824', u'tez.session.client.timeout.secs': u'-1', u'tez.cluster.additional.classpath.prefix': u'/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure', u'tez.lib.uris': 
u'/hdp/apps/${hdp.version}/tez/tez.tar.gz', u'tez.staging-dir': u'/tmp/${user.name}/staging', u'tez.am.am-rm.heartbeat.interval-ms.max': u'250', u'tez.am.maxtaskfailures.per.node': u'10', u'tez.am.resource.memory.mb': u'27648', u'tez.runtime.shuffle.memory.limit.percent': u'0.25', u'tez.am.container.reuse.non-local-fallback.enabled': u'false', u'tez.am.container.reuse.locality.delay-allocation-millis': u'250', u'tez.am.container.reuse.rack-fallback.enabled': u'true', u'tez.runtime.pipelined.sorter.sort.threads': u'2', u'tez.grouping.split-waves': u'1.7', u'tez.shuffle-vertex-manager.min-src-fraction': u'0.2', u'tez.task.generate.counters.per.io': u'true'}, u'anonymization-rules': {u'anonymization-rules-content': u'{\n "rules":[\n {\n "name": "IP Address",\n "description": "Anonymize IP addresses like 123.123.12.34 from all non-binary files",\n "rule_id": "Pattern",\n "patterns": ["(?![\\\\-])((?<![a-z0-9\\\\.])[0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3}(?!\\\\.[0-9])(?![a-z0-9]))"],\n "exclude_files": ["hdp-select*.*", "*version.txt"]\n },\n {\n "name": "Domain Names",\n "rule_id": "Domain"\n },\n {\n "name": "File Names",\n "rule_id": "FileName",\n "description": "Anonymize file names that have domain names and/or ip addresses",\n "include_files": ["*.log*", "*.out*"],\n "shared": true\n },\n {\n "name": "SSN",\n "description": "Anonymize social security numbers in format xxx-xx-xxxx from .log and .out files",\n "rule_id": "Pattern",\n "patterns": ["(?<![0-9x])([0-9x]{3}-[0-9x]{2}-[0-9]{4})(?![0-9x])"],\n "include_files": ["*.log*", "*.out*"],\n "exclude_files" : ["hst.log*", "hst.out*"],\n "shared": false\n },\n {\n "name": "Credit Card Numbers",\n "description": "Anonymize credit card numbers from .log and .out files",\n "rule_id": "Pattern",\n "patterns": ["(?<![0-9x])(18|21|3[04678]|4[0-9x]|5[1-5]|60|65)[0-9x]{2}[- ]([0-9x]{4}[- ]){2}[0-9]{0,4}(?![0-9x])"],\n "extract": "(?<![0-9x])([0-9x -]+)(?![0-9x])",\n "include_files": ["*.log*", "*.out*"],\n "exclude_files" : ["hst.log.*", "hst.out"],\n "shared": false\n },\n {\n "name": "email",\n "description": "Anonymize based on standard email pattern from all files except metadata.json which is used by SmartSense to send bundle notifications",\n "rule_id": "Pattern",\n "patterns": ["(?<![a-z0-9._%+-])[a-z0-9._%+-]+@[a-z0-9.-]+\\\\.[a-z]{2,6}(?![a-z0-9._%+-])$?"],\n "exclude_files" : ["metadata.json"],\n "shared": true\n },\n {\n "name": "Core Site S3 Credentials",\n "description": "Anonymize the value of properties from core-site.xml that might contain S3 credentials",\n "rule_id": "Property",\n "properties": ["fs.s3a.session.token","fs.s3a.proxy.host","fs.s3a.proxy.username"],\n "include_files": ["core-site.xml"],\n "action" : "REPLACE",\n "replace_value": "Hidden"\n },\n {\n "name": "Password Configurations",\n "description": "Anonymize various password related properties from configuration files. 
Properties and configuration files are listed below",\n "rule_id": "Property",\n "properties": [".*password.*", ".*awsAccessKeyId.*", ".*awsSecretAccessKey.*", "fs.azure.account.key.*", "ranger.service.https.attrib.keystore.pass","https.attrib.keystorePass", "HTTPS_KEYSTORE_PASS"],\n "include_files": ["*.xml", "*.properties", "*.yaml", "*.ini", "*.json"],\n "exclude_files" : ["capacity-scheduler.xml"],\n "shared": false\n },\n {\n "name": "KNOX LDAP Password",\n "description": "Anonymize KNOX LDAP passwords from topology configurations xml",\n "rule_id": "XPATH",\n "paths": ["//name[contains(text(),\'Password\')]/following-sibling::value[1]/text()"],\n "include_files": ["topologies/*.xml"],\n "parentNode": "param",\n "shared": false\n },\n {\n "name": "Ranger KMS Oozie Ganglia Falcon Passwords",\n "description": "Anomymize various password related properties for multiple services. Properties are listed below",\n "rule_id": "Pattern",\n "patterns": ["oozie.https.keystore.pass=([^\\\\s]*)", "OOZIE_HTTPS_KEYSTORE_PASS=([^\\\\s]*)", "ganglia_password=([^\\\\s]*)", "javax.jdo.option.ConnectionPassword=([^\\\\s]*)","KMS_SSL_KEYSTORE_PASS=([^\\\\s]*)","falcon.statestore.jdbc.password=([^\\\\s]*)"],\n "extract": "=([^\\\\s]*)",\n "include_files": ["java_process.txt", "pid.txt", "ambari-agent.log", "oozie-env.cmd", "hive_set_v.txt", "beeline_set_v.txt", "process_list.txt", "kms-env.sh", "statestore.credentials"],\n "shared": false\n },\n {\n "name": "MAC Addresses",\n "description": "Anonymize MAC addresses like ab:12:3c:44:5d:6e from network_info.txt",\n "rule_id": "Pattern",\n "patterns": ["(([0-9a-f]{2}[:-]){5}[0-9a-f]{2})"],\n "extract": "([0-9a-f:-]{17})",\n "include_files": ["network_info.txt"],\n "shared": true\n },\n {\n "name":"IPv6 Addresses",\n "description":"Anonymize IPv6 addresses like inet6 ab10::g457:6xxx:xxxx:6c9b/64 from network_info.txt",\n "rule_id": "Pattern",\n "patterns": ["inet6 addr:\\\\s((([\\\\da-f:\\\\/\\\\d]))*)"],\n "extract": ":\\\\s((([\\\\da-f:\\\\/\\\\d]))*)",\n "include_files": ["network_info.txt"],\n "shared":true\n },\n {\n "name": "Zeppelin Interpreter Passwords",\n "description": "Anonymize password related properties from zeppelin interpreter.json",\n "rule_id": "JSONPATH",\n "paths": ["$.interpreterSettings..properties.[\'hive.password\',\'phoenix.password\',\'default.password\',\'spark2.password\',\'spark.password\',\'psql.password\',\'hive_interactive.password\']"],\n "include_files": ["interpreter.json"],\n "shared": false\n },\n {\n "name": "Zeppelin Interpreter Passwords",\n "description": "Anonymize password related properties from zeppelin interpreter.json",\n "rule_id": "JSONPATH",\n "paths": ["$.interpreterSettings..properties.*[?(@.name=~/.*password.*/i)].value"],\n "include_files": ["interpreter.json"],\n "shared": false\n }\n ]\n}'}, u'hiveserver2-site': {u'hive.metastore.metrics.enabled': u'true', u'hive.security.authorization.enabled': u'false', u'hive.server2.metrics.enabled': u'true', u'hive.service.metrics.hadoop2.component': u'hiveserver2', u'hive.service.metrics.reporter': u'HADOOP2'}, u'ranger-hive-plugin-properties': {}, u'activity-log4j': {u'activity_log_dir': u'/var/log/smartsense-activity', u'activity_max_file_size': u'30', u'activity_max_backup_index': u'10', u'activity-log4j-content': u'\n# Copyright (c) 2011-2018, Hortonworks Inc. 
All rights reserved.\n# Except as expressly permitted in a written agreement between you\n# or your company and Hortonworks, Inc, any use, reproduction,\n# modification, redistribution, sharing, lending or other exploitation\n# of all or any part of the contents of this file is strictly prohibited.\n\n# Define some default values that can be overridden by system properties\n# Root logger option\nlog4j.rootLogger=INFO,file\n\nlog4j.appender.file=org.apache.log4j.RollingFileAppender\nlog4j.appender.file.File={{activity_log_dir}}/${log.file.name}\nlog4j.appender.file.MaxFileSize={{activity_max_file_size}}MB\nlog4j.appender.file.MaxBackupIndex={{activity_max_backup_index}}\nlog4j.appender.file.layout=org.apache.log4j.PatternLayout\nlog4j.appender.file.layout.ConversionPattern=%d{ISO8601} %5p [%t] %c{1}:%L - %m%n'}, u'core-site': {u'net.topology.script.file.name': u'/etc/hadoop/conf/topology_script.py', u'hadoop.proxyuser.hdfs.groups': u'*', u'hadoop.security.instrumentation.requires.admin': u'false', u'fs.s3a.fast.upload.buffer': u'disk', u'hadoop.proxyuser.zeppelin.hosts': u'*', u'fs.s3a.multipart.size': u'67108864', u'fs.trash.interval': u'360', u'fs.azure.user.agent.prefix': u'User-Agent: APN/1.0 Hortonworks/1.0 HDP/{{version}}', u'hadoop.proxyuser.hive.groups': u'*', u'hadoop.http.authentication.simple.anonymous.allowed': u'true', u'io.compression.codecs': u'org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec', u'hadoop.proxyuser.zeppelin.groups': u'*', u'hadoop.proxyuser.root.groups': u'*', u'hadoop.http.cross-origin.allowed-headers': u'X-Requested-With,Content-Type,Accept,Origin,WWW-Authenticate,Accept-Encoding,Transfer-Encoding', u'hadoop.proxyuser.livy.hosts': u'*', u'ipc.client.idlethreshold': u'8000', u'io.file.buffer.size': u'131072', u'fs.s3a.user.agent.prefix': u'User-Agent: APN/1.0 Hortonworks/1.0 HDP/{{version}}', u'io.serializations': u'org.apache.hadoop.io.serializer.WritableSerialization', u'hadoop.security.authentication': u'simple', u'hadoop.http.filter.initializers': u'org.apache.hadoop.security.AuthenticationFilterInitializer,org.apache.hadoop.security.HttpCrossOriginFilterInitializer', u'hadoop.proxyuser.root.hosts': u'*', u'mapreduce.jobtracker.webinterface.trusted': u'false', u'hadoop.http.cross-origin.allowed-methods': u'GET,PUT,POST,OPTIONS,HEAD,DELETE', u'fs.s3a.fast.upload': u'true', u'hadoop.http.cross-origin.max-age': u'1800', u'hadoop.proxyuser.hdfs.hosts': u'*', u'hadoop.proxyuser.hive.hosts': u'{host}', u'fs.defaultFS': u'hdfs://{host}:8020', u'hadoop.proxyuser.livy.groups': u'*', u'ha.failover-controller.active-standby-elector.zk.op.retries': u'120', u'hadoop.security.authorization': u'false', u'ipc.server.tcpnodelay': u'true', u'ipc.client.connect.max.retries': u'50', u'hadoop.security.auth_to_local': u'DEFAULT', u'hadoop.http.cross-origin.allowed-origins': u'*', u'ipc.client.connection.maxidletime': u'30000'}, u'yarn-hbase-site': {u'hbase.master.info.bindAddress': u'0.0.0.0', u'hbase.master.wait.on.regionservers.timeout': u'30000', u'hbase.client.keyvalue.maxsize': u'1048576', u'hbase.hstore.compactionThreshold': u'3', u'hbase.hregion.majorcompaction.jitter': u'0.50', u'hbase.client.retries.number': u'7', u'hbase.client.scanner.caching': u'100', u'hbase.regionserver.executor.openregion.threads': u'20', u'hbase.rootdir': u'/atsv2/hbase/data', u'hbase.rpc.timeout': u'90000', u'hbase.regionserver.handler.count': u'30', u'hbase.hregion.majorcompaction': u'604800000', 
u'hbase.rpc.protection': u'authentication', u'hbase.bucketcache.size': u'', u'hbase.bucketcache.percentage.in.combinedcache': u'', u'hbase.hregion.memstore.flush.size': u'134217728', u'hbase.superuser': u'yarn', u'hbase.zookeeper.property.clientPort': u'{{zookeeper_clientPort}}', u'hbase.hstore.compaction.max': u'10', u'hbase.master.namespace.init.timeout': u'2400000', u'hbase.master.ui.readonly': u'false', u'zookeeper.session.timeout': u'90000', u'hbase.regionserver.global.memstore.size': u'0.4', u'hbase.tmp.dir': u'/tmp/hbase-${user.name}', u'hbase.hregion.max.filesize': u'10737418240', u'hfile.block.cache.size': u'0.4', u'hbase.regionserver.port': u'17020', u'hbase.security.authentication': u'simple', u'hbase.hstore.blockingStoreFiles': u'10', u'hbase.master.info.port': u'17010', u'hbase.zookeeper.quorum': u'{{zookeeper_quorum_hosts}}', u'hbase.regionserver.info.port': u'17030', u'zookeeper.recovery.retry': u'6', u'zookeeper.znode.parent': u'/atsv2-hbase-unsecure', u'hbase.coprocessor.master.classes': u'', u'hbase.defaults.for.version.skip': u'true', u'hbase.master.port': u'17000', u'hbase.security.authorization': u'false', u'hbase.bucketcache.ioengine': u'', u'hbase.local.dir': u'${hbase.tmp.dir}/local', u'hbase.coprocessor.regionserver.classes': u'', u'hbase.cluster.distributed': u'true', u'hbase.hregion.memstore.mslab.enabled': u'true', u'dfs.domain.socket.path': u'/var/lib/hadoop-hdfs/dn_socket', u'hbase.coprocessor.region.classes': u'', u'hbase.zookeeper.useMulti': u'true', u'hbase.hregion.memstore.block.multiplier': u'4'}, u'knoxsso-topology': {u'content': u'\n <topology>\n <gateway>\n <provider>\n <role>webappsec</role>\n <name>WebAppSec</name>\n <enabled>true</enabled>\n <param><name>xframe.options.enabled</name><value>true</value></param>\n </provider>\n\n <provider>\n <role>authentication</role>\n <name>ShiroProvider</name>\n <enabled>true</enabled>\n <param>\n <name>sessionTimeout</name>\n <value>30</value>\n </param>\n <param>\n <name>redirectToUrl</name>\n <value>/gateway/knoxsso/knoxauth/login.html</value>\n </param>\n <param>\n <name>restrictedCookies</name>\n <value>rememberme,WWW-Authenticate</value>\n </param>\n <param>\n <name>main.ldapRealm</name>\n <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>\n </param>\n <param>\n <name>main.ldapContextFactory</name>\n <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory</name>\n <value>$ldapContextFactory</value>\n </param>\n <param>\n <name>main.ldapRealm.userDnTemplate</name>\n <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.url</name>\n <value>ldap://localhost:33389</value>\n </param>\n <param>\n <name>main.ldapRealm.authenticationCachingEnabled</name>\n <value>false</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.authenticationMechanism</name>\n <value>simple</value>\n </param>\n <param>\n <name>urls./**</name>\n <value>authcBasic</value>\n </param>\n </provider>\n\n <provider>\n <role>identity-assertion</role>\n <name>Default</name>\n <enabled>true</enabled>\n </provider>\n </gateway>\n\n <application>\n <name>knoxauth</name>\n </application>\n\n <service>\n <role>KNOXSSO</role>\n <param>\n <name>knoxsso.cookie.secure.only</name>\n <value>false</value>\n </param>\n <param>\n <name>knoxsso.token.ttl</name>\n <value>30000</value>\n </param>\n </service>\n\n </topology>'}, u'hiveserver2-interactive-site': {u'hive.metastore.metrics.enabled': 
u'true', u'hive.server2.metrics.enabled': u'true', u'hive.async.log.enabled': u'false', u'hive.service.metrics.hadoop2.component': u'hiveserver2', u'hive.service.metrics.reporter': u'HADOOP2'}, u'capacity-scheduler': {u'yarn.scheduler.capacity.node-locality-delay': u'40', u'yarn.scheduler.capacity.root.accessible-node-labels': u'*', u'yarn.scheduler.capacity.root.capacity': u'100', u'yarn.scheduler.capacity.maximum-am-resource-percent': u'1', u'yarn.scheduler.capacity.maximum-applications': u'10000', u'yarn.scheduler.capacity.root.default.user-limit-factor': u'1', u'yarn.scheduler.capacity.root.default.maximum-capacity': u'100', u'yarn.scheduler.capacity.root.acl_submit_applications': u'*', u'yarn.scheduler.capacity.root.default.acl_submit_applications': u'*', u'yarn.scheduler.capacity.root.default.state': u'RUNNING', u'yarn.scheduler.capacity.root.default.capacity': u'100', u'yarn.scheduler.capacity.root.acl_administer_queue': u'*', u'yarn.scheduler.capacity.root.priority': u'0', u'yarn.scheduler.capacity.root.queues': u'default', u'yarn.scheduler.capacity.resource-calculator': u'org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator', u'yarn.scheduler.capacity.root.default.acl_administer_jobs': u'*', u'yarn.scheduler.capacity.queue-mappings-override.enable': u'false', u'yarn.scheduler.capacity.root.default.priority': u'0'}, u'zoo.cfg': {u'clientPort': u'2181', u'autopurge.purgeInterval': u'24', u'syncLimit': u'5', u'dataDir': u'/hadoop/zookeeper', u'initLimit': u'10', u'tickTime': u'3000', u'autopurge.snapRetainCount': u'30'}, u'ams-log4j': {u'content': u'\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# Define some default values that can be overridden by system properties\nams.log.dir=.\nams.log.file=ambari-metrics-collector.log\n\n# Root logger option\nlog4j.rootLogger=INFO,file\n\n# Direct log messages to a log file\nlog4j.appender.file=org.apache.log4j.RollingFileAppender\nlog4j.appender.file.File=${ams.log.dir}/${ams.log.file}\nlog4j.appender.file.MaxFileSize={{ams_log_max_backup_size}}MB\nlog4j.appender.file.MaxBackupIndex={{ams_log_number_of_backup_files}}\nlog4j.appender.file.layout=org.apache.log4j.PatternLayout\nlog4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n', u'ams_log_max_backup_size': u'80', u'ams_log_number_of_backup_files': u'60'}, u'hive-exec-log4j2': {u'content': u'\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nstatus = INFO\nname = HiveExecLog4j2\npackages = org.apache.hadoop.hive.ql.log\n\n# list of properties\nproperty.hive.log.level = {{hive_log_level}}\nproperty.hive.root.logger = FA\nproperty.hive.query.id = hadoop\nproperty.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}\nproperty.hive.log.file = ${sys:hive.query.id}.log\n\n# list of all appenders\nappenders = console, FA\n\n# console appender\nappender.console.type = Console\nappender.console.name = console\nappender.console.target = SYSTEM_ERR\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} [%t]: %p %c{2}: %m%n\n\n# simple file appender\nappender.FA.type = File\nappender.FA.name = FA\nappender.FA.fileName = ${sys:hive.log.dir}/${sys:hive.log.file}\nappender.FA.layout.type = PatternLayout\nappender.FA.layout.pattern = %d{ISO8601} %-5p [%t]: %c{2} (%F:%M(%L)) - %m%n\n\n# list of all loggers\nloggers = NIOServerCnxn, ClientCnxnSocketNIO, DataNucleus, Datastore, JPOX\n\nlogger.NIOServerCnxn.name = org.apache.zookeeper.server.NIOServerCnxn\nlogger.NIOServerCnxn.level = WARN\n\nlogger.ClientCnxnSocketNIO.name = org.apache.zookeeper.ClientCnxnSocketNIO\nlogger.ClientCnxnSocketNIO.level = WARN\n\nlogger.DataNucleus.name = DataNucleus\nlogger.DataNucleus.level = ERROR\n\nlogger.Datastore.name = Datastore\nlogger.Datastore.level = ERROR\n\nlogger.JPOX.name = JPOX\nlogger.JPOX.level = ERROR\n\n# root logger\nrootLogger.level = ${sys:hive.log.level}\nrootLogger.appenderRefs = root\nrootLogger.appenderRef.root.ref = ${sys:hive.root.logger}'}, u'zookeeper-env': {u'zk_server_heapsize': u'1024m', u'zookeeper_keytab_path': u'', u'zk_user': u'zookeeper', u'zk_log_dir': u'/var/log/zookeeper', u'content': u'\nexport JAVA_HOME={{java64_home}}\nexport ZOOKEEPER_HOME={{zk_home}}\nexport ZOO_LOG_DIR={{zk_log_dir}}\nexport ZOOPIDFILE={{zk_pid_file}}\nexport SERVER_JVMFLAGS={{zk_server_heapsize}}\nexport JAVA=$JAVA_HOME/bin/java\nexport CLASSPATH=$CLASSPATH:/usr/share/zookeeper/*\n\n{% if security_enabled %}\nexport SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djava.security.auth.login.config={{zk_server_jaas_file}}"\nexport CLIENT_JVMFLAGS="$CLIENT_JVMFLAGS -Djava.security.auth.login.config={{zk_client_jaas_file}}"\n{% endif %}', u'zk_pid_dir': u'/var/run/zookeeper', u'zookeeper_principal_name': u''}, u'ams-hbase-log4j': {u'content': u'\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n# Define some default values that can be overridden by system properties\nhbase.root.logger=INFO,console\nhbase.security.logger=INFO,console\nhbase.log.dir=.\nhbase.log.file=hbase.log\n\n# Define the root logger to the system property "hbase.root.logger".\nlog4j.rootLogger=${hbase.root.logger}\n\n# Logging Threshold\nlog4j.threshold=ALL\n\n#\n# Daily Rolling File Appender\n#\nlog4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}\n\n# Rollver at midnight\nlog4j.appender.DRFA.DatePattern=.yyyy-MM-dd\n\n# 30-day backup\n#log4j.appender.DRFA.MaxBackupIndex=30\nlog4j.appender.DRFA.layout=org.apache.log4j.PatternLayout\n\n# Pattern format: Date LogLevel LoggerName LogMessage\nlog4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n# Rolling File Appender properties\nhbase.log.maxfilesize={{ams_hbase_log_maxfilesize}}MB\nhbase.log.maxbackupindex={{ams_hbase_log_maxbackupindex}}\n\n# Rolling File Appender\nlog4j.appender.RFA=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}\n\nlog4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}\nlog4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}\n\nlog4j.appender.RFA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n#\n# Security audit appender\n#\nhbase.security.log.file=SecurityAuth.audit\nhbase.security.log.maxfilesize={{ams_hbase_security_log_maxfilesize}}MB\nhbase.security.log.maxbackupindex={{ams_hbase_security_log_maxbackupindex}}\nlog4j.appender.RFAS=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}\nlog4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}\nlog4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}\nlog4j.appender.RFAS.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\nlog4j.category.SecurityLogger=${hbase.security.logger}\nlog4j.additivity.SecurityLogger=false\n#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE\n\n#\n# Null Appender\n#\nlog4j.appender.NullAppender=org.apache.log4j.varia.NullAppender\n\n#\n# console\n# Add "console" to rootlogger above if you want to use this\n#\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n# Custom Logging levels\n\nlog4j.logger.org.apache.zookeeper=INFO\n#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG\nlog4j.logger.org.apache.hadoop.hbase=INFO\n# Make these two classes INFO-level. 
Make them DEBUG to see more zk debug.\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO\n#log4j.logger.org.apache.hadoop.dfs=DEBUG\n# Set this class to log INFO only otherwise its OTT\n# Enable this to get detailed connection error/retry logging.\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=TRACE\n\n\n# Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output)\n#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG\n\n# Uncomment the below if you want to remove logging of client region caching\'\n# and scan of .META. messages\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO\n# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO', u'ams_hbase_log_maxfilesize': u'256', u'ams_hbase_security_log_maxbackupindex': u'20', u'ams_hbase_log_maxbackupindex': u'20', u'ams_hbase_security_log_maxfilesize': u'256'}, u'cluster-env': {u'security_enabled': u'false', u'hide_yarn_memory_widget': u'false', u'stack_name': u'HDP', u'enable_external_ranger': u'false', u'override_uid': u'true', u'fetch_nonlocal_groups': u'true', u'one_dir_per_partition': u'false', u'agent_mounts_ignore_list': u'', u'repo_ubuntu_template': u'{{package_type}} {{base_url}} {{components}}', u'stack_packages': u'{\n "HDP": {\n "stack-select": {\n "ACCUMULO": {\n "ACCUMULO_CLIENT": {\n "STACK-SELECT-PACKAGE": "accumulo-client",\n "INSTALL": [\n "accumulo-client"\n ],\n "PATCH": [\n "accumulo-client"\n ],\n "STANDARD": [\n "accumulo-client"\n ]\n },\n "ACCUMULO_GC": {\n "STACK-SELECT-PACKAGE": "accumulo-gc",\n "INSTALL": [\n "accumulo-gc"\n ],\n "PATCH": [\n "accumulo-gc"\n ],\n "STANDARD": [\n "accumulo-gc",\n "accumulo-client"\n ]\n },\n "ACCUMULO_MASTER": {\n "STACK-SELECT-PACKAGE": "accumulo-master",\n "INSTALL": [\n "accumulo-master"\n ],\n "PATCH": [\n "accumulo-master"\n ],\n "STANDARD": [\n "accumulo-master",\n "accumulo-client"\n ]\n },\n "ACCUMULO_MONITOR": {\n "STACK-SELECT-PACKAGE": "accumulo-monitor",\n "INSTALL": [\n "accumulo-monitor"\n ],\n "PATCH": [\n "accumulo-monitor"\n ],\n "STANDARD": [\n "accumulo-monitor",\n "accumulo-client"\n ]\n },\n "ACCUMULO_TRACER": {\n "STACK-SELECT-PACKAGE": "accumulo-tracer",\n "INSTALL": [\n "accumulo-tracer"\n ],\n "PATCH": [\n "accumulo-tracer"\n ],\n "STANDARD": [\n "accumulo-tracer",\n "accumulo-client"\n ]\n },\n "ACCUMULO_TSERVER": {\n "STACK-SELECT-PACKAGE": "accumulo-tablet",\n "INSTALL": [\n "accumulo-tablet"\n ],\n "PATCH": [\n "accumulo-tablet"\n ],\n "STANDARD": [\n "accumulo-tablet",\n "accumulo-client"\n ]\n }\n },\n "ATLAS": {\n "ATLAS_CLIENT": {\n "STACK-SELECT-PACKAGE": "atlas-client",\n "INSTALL": [\n "atlas-client"\n ],\n "PATCH": [\n "atlas-client"\n ],\n "STANDARD": [\n "atlas-client"\n ]\n },\n "ATLAS_SERVER": {\n "STACK-SELECT-PACKAGE": "atlas-server",\n "INSTALL": [\n "atlas-server"\n ],\n "PATCH": [\n "atlas-server"\n ],\n "STANDARD": [\n "atlas-server"\n ]\n }\n },\n "DRUID": {\n "DRUID_COORDINATOR": {\n "STACK-SELECT-PACKAGE": "druid-coordinator",\n "INSTALL": [\n "druid-coordinator"\n ],\n "PATCH": [\n "druid-coordinator"\n ],\n "STANDARD": [\n "druid-coordinator"\n ]\n },\n "DRUID_OVERLORD": {\n "STACK-SELECT-PACKAGE": "druid-overlord",\n "INSTALL": [\n "druid-overlord"\n ],\n "PATCH": [\n "druid-overlord"\n ],\n "STANDARD": [\n "druid-overlord"\n ]\n },\n "DRUID_HISTORICAL": {\n "STACK-SELECT-PACKAGE": "druid-historical",\n "INSTALL": [\n 
"druid-historical"\n ],\n "PATCH": [\n "druid-historical"\n ],\n "STANDARD": [\n "druid-historical"\n ]\n },\n "DRUID_BROKER": {\n "STACK-SELECT-PACKAGE": "druid-broker",\n "INSTALL": [\n "druid-broker"\n ],\n "PATCH": [\n "druid-broker"\n ],\n "STANDARD": [\n "druid-broker"\n ]\n },\n "DRUID_MIDDLEMANAGER": {\n "STACK-SELECT-PACKAGE": "druid-middlemanager",\n "INSTALL": [\n "druid-middlemanager"\n ],\n "PATCH": [\n "druid-middlemanager"\n ],\n "STANDARD": [\n "druid-middlemanager"\n ]\n },\n "DRUID_ROUTER": {\n "STACK-SELECT-PACKAGE": "druid-router",\n "INSTALL": [\n "druid-router"\n ],\n "PATCH": [\n "druid-router"\n ],\n "STANDARD": [\n "druid-router"\n ]\n }\n },\n "HBASE": {\n "HBASE_CLIENT": {\n "STACK-SELECT-PACKAGE": "hbase-client",\n "INSTALL": [\n "hbase-client"\n ],\n "PATCH": [\n "hbase-client"\n ],\n "STANDARD": [\n "hbase-client",\n "phoenix-client",\n "hadoop-client"\n ]\n },\n "HBASE_MASTER": {\n "STACK-SELECT-PACKAGE": "hbase-master",\n "INSTALL": [\n "hbase-master"\n ],\n "PATCH": [\n "hbase-master"\n ],\n "STANDARD": [\n "hbase-master"\n ]\n },\n "HBASE_REGIONSERVER": {\n "STACK-SELECT-PACKAGE": "hbase-regionserver",\n "INSTALL": [\n "hbase-regionserver"\n ],\n "PATCH": [\n "hbase-regionserver"\n ],\n "STANDARD": [\n "hbase-regionserver"\n ]\n },\n "PHOENIX_QUERY_SERVER": {\n "STACK-SELECT-PACKAGE": "phoenix-server",\n "INSTALL": [\n "phoenix-server"\n ],\n "PATCH": [\n "phoenix-server"\n ],\n "STANDARD": [\n "phoenix-server"\n ]\n }\n },\n "HDFS": {\n "DATANODE": {\n "STACK-SELECT-PACKAGE": "hadoop-hdfs-datanode",\n "INSTALL": [\n "hadoop-hdfs-datanode"\n ],\n "PATCH": [\n "hadoop-hdfs-datanode"\n ],\n "STANDARD": [\n "hadoop-hdfs-datanode"\n ]\n },\n "HDFS_CLIENT": {\n "STACK-SELECT-PACKAGE": "hadoop-hdfs-client",\n "INSTALL": [\n "hadoop-hdfs-client"\n ],\n "PATCH": [\n "hadoop-hdfs-client"\n ],\n "STANDARD": [\n "hadoop-client"\n ]\n },\n "NAMENODE": {\n "STACK-SELECT-PACKAGE": "hadoop-hdfs-namenode",\n "INSTALL": [\n "hadoop-hdfs-namenode"\n ],\n "PATCH": [\n "hadoop-hdfs-namenode"\n ],\n "STANDARD": [\n "hadoop-hdfs-namenode"\n ]\n },\n "NFS_GATEWAY": {\n "STACK-SELECT-PACKAGE": "hadoop-hdfs-nfs3",\n "INSTALL": [\n "hadoop-hdfs-nfs3"\n ],\n "PATCH": [\n "hadoop-hdfs-nfs3"\n ],\n "STANDARD": [\n "hadoop-hdfs-nfs3"\n ]\n },\n "JOURNALNODE": {\n "STACK-SELECT-PACKAGE": "hadoop-hdfs-journalnode",\n "INSTALL": [\n "hadoop-hdfs-journalnode"\n ],\n "PATCH": [\n "hadoop-hdfs-journalnode"\n ],\n "STANDARD": [\n "hadoop-hdfs-journalnode"\n ]\n },\n "SECONDARY_NAMENODE": {\n "STACK-SELECT-PACKAGE": "hadoop-hdfs-secondarynamenode",\n "INSTALL": [\n "hadoop-hdfs-secondarynamenode"\n ],\n "PATCH": [\n "hadoop-hdfs-secondarynamenode"\n ],\n "STANDARD": [\n "hadoop-hdfs-secondarynamenode"\n ]\n },\n "ZKFC": {\n "STACK-SELECT-PACKAGE": "hadoop-hdfs-zkfc",\n "INSTALL": [\n "hadoop-hdfs-zkfc"\n ],\n "PATCH": [\n "hadoop-hdfs-zkfc"\n ],\n "STANDARD": [\n "hadoop-hdfs-zkfc"\n ]\n }\n },\n "HIVE": {\n "HIVE_METASTORE": {\n "STACK-SELECT-PACKAGE": "hive-metastore",\n "INSTALL": [\n "hive-metastore"\n ],\n "PATCH": [\n "hive-metastore"\n ],\n "STANDARD": [\n "hive-metastore"\n ]\n },\n "HIVE_SERVER": {\n "STACK-SELECT-PACKAGE": "hive-server2",\n "INSTALL": [\n "hive-server2"\n ],\n "PATCH": [\n "hive-server2"\n ],\n "STANDARD": [\n "hive-server2"\n ]\n },\n "HIVE_SERVER_INTERACTIVE": {\n "STACK-SELECT-PACKAGE": "hive-server2-hive",\n "INSTALL": [\n "hive-server2-hive"\n ],\n "PATCH": [\n "hive-server2-hive"\n ],\n "STANDARD": [\n "hive-server2-hive"\n ]\n },\n "HIVE_CLIENT": {\n 
"STACK-SELECT-PACKAGE": "hive-client",\n "INSTALL": [\n "hive-client"\n ],\n "PATCH": [\n "hive-client"\n ],\n "STANDARD": [\n "hadoop-client"\n ]\n }\n },\n "KAFKA": {\n "KAFKA_BROKER": {\n "STACK-SELECT-PACKAGE": "kafka-broker",\n "INSTALL": [\n "kafka-broker"\n ],\n "PATCH": [\n "kafka-broker"\n ],\n "STANDARD": [\n "kafka-broker"\n ]\n }\n },\n "KNOX": {\n "KNOX_GATEWAY": {\n "STACK-SELECT-PACKAGE": "knox-server",\n "INSTALL": [\n "knox-server"\n ],\n "PATCH": [\n "knox-server"\n ],\n "STANDARD": [\n "knox-server"\n ]\n }\n },\n "MAPREDUCE2": {\n "HISTORYSERVER": {\n "STACK-SELECT-PACKAGE": "hadoop-mapreduce-historyserver",\n "INSTALL": [\n "hadoop-mapreduce-historyserver"\n ],\n "PATCH": [\n "hadoop-mapreduce-historyserver"\n ],\n "STANDARD": [\n "hadoop-mapreduce-historyserver"\n ]\n },\n "MAPREDUCE2_CLIENT": {\n "STACK-SELECT-PACKAGE": "hadoop-mapreduce-client",\n "INSTALL": [\n "hadoop-mapreduce-client"\n ],\n "PATCH": [\n "hadoop-mapreduce-client"\n ],\n "STANDARD": [\n "hadoop-client"\n ]\n }\n },\n "OOZIE": {\n "OOZIE_CLIENT": {\n "STACK-SELECT-PACKAGE": "oozie-client",\n "INSTALL": [\n "oozie-client"\n ],\n "PATCH": [\n "oozie-client"\n ],\n "STANDARD": [\n "oozie-client"\n ]\n },\n "OOZIE_SERVER": {\n "STACK-SELECT-PACKAGE": "oozie-server",\n "INSTALL": [\n "oozie-client",\n "oozie-server"\n ],\n "PATCH": [\n "oozie-server",\n "oozie-client"\n ],\n "STANDARD": [\n "oozie-client",\n "oozie-server"\n ]\n }\n },\n "PIG": {\n "PIG": {\n "STACK-SELECT-PACKAGE": "pig-client",\n "INSTALL": [\n "pig-client"\n ],\n "PATCH": [\n "pig-client"\n ],\n "STANDARD": [\n "hadoop-client"\n ]\n }\n },\n "RANGER": {\n "RANGER_ADMIN": {\n "STACK-SELECT-PACKAGE": "ranger-admin",\n "INSTALL": [\n "ranger-admin"\n ],\n "PATCH": [\n "ranger-admin"\n ],\n "STANDARD": [\n "ranger-admin"\n ]\n },\n "RANGER_TAGSYNC": {\n "STACK-SELECT-PACKAGE": "ranger-tagsync",\n "INSTALL": [\n "ranger-tagsync"\n ],\n "PATCH": [\n "ranger-tagsync"\n ],\n "STANDARD": [\n "ranger-tagsync"\n ]\n },\n "RANGER_USERSYNC": {\n "STACK-SELECT-PACKAGE": "ranger-usersync",\n "INSTALL": [\n "ranger-usersync"\n ],\n "PATCH": [\n "ranger-usersync"\n ],\n "STANDARD": [\n "ranger-usersync"\n ]\n }\n },\n "RANGER_KMS": {\n "RANGER_KMS_SERVER": {\n "STACK-SELECT-PACKAGE": "ranger-kms",\n "INSTALL": [\n "ranger-kms"\n ],\n "PATCH": [\n "ranger-kms"\n ],\n "STANDARD": [\n "ranger-kms"\n ]\n }\n },\n "SPARK2": {\n "LIVY2_CLIENT": {\n "STACK-SELECT-PACKAGE": "livy2-client",\n "INSTALL": [\n "livy2-client"\n ],\n "PATCH": [\n "livy2-client"\n ],\n "STANDARD": [\n "livy2-client"\n ]\n },\n "LIVY2_SERVER": {\n "STACK-SELECT-PACKAGE": "livy2-server",\n "INSTALL": [\n "livy2-server"\n ],\n "PATCH": [\n "livy2-server"\n ],\n "STANDARD": [\n "livy2-server"\n ]\n },\n "SPARK2_CLIENT": {\n "STACK-SELECT-PACKAGE": "spark2-client",\n "INSTALL": [\n "spark2-client"\n ],\n "PATCH": [\n "spark2-client"\n ],\n "STANDARD": [\n "spark2-client"\n ]\n },\n "SPARK2_JOBHISTORYSERVER": {\n "STACK-SELECT-PACKAGE": "spark2-historyserver",\n "INSTALL": [\n "spark2-historyserver"\n ],\n "PATCH": [\n "spark2-historyserver"\n ],\n "STANDARD": [\n "spark2-historyserver"\n ]\n },\n "SPARK2_THRIFTSERVER": {\n "STACK-SELECT-PACKAGE": "spark2-thriftserver",\n "INSTALL": [\n "spark2-thriftserver"\n ],\n "PATCH": [\n "spark2-thriftserver"\n ],\n "STANDARD": [\n "spark2-thriftserver"\n ]\n }\n },\n "SQOOP": {\n "SQOOP": {\n "STACK-SELECT-PACKAGE": "sqoop-client",\n "INSTALL": [\n "sqoop-client"\n ],\n "PATCH": [\n "sqoop-client"\n ],\n "STANDARD": [\n "sqoop-client"\n ]\n }\n 
},\n "STORM": {\n "NIMBUS": {\n "STACK-SELECT-PACKAGE": "storm-nimbus",\n "INSTALL": [\n "storm-client",\n "storm-nimbus"\n ],\n "PATCH": [\n "storm-client",\n "storm-nimbus"\n ],\n "STANDARD": [\n "storm-client",\n "storm-nimbus"\n ]\n },\n "SUPERVISOR": {\n "STACK-SELECT-PACKAGE": "storm-supervisor",\n "INSTALL": [\n "storm-client",\n "storm-supervisor"\n ],\n "PATCH": [\n "storm-client",\n "storm-supervisor"\n ],\n "STANDARD": [\n "storm-client",\n "storm-supervisor"\n ]\n },\n "DRPC_SERVER": {\n "STACK-SELECT-PACKAGE": "storm-client",\n "INSTALL": [\n "storm-client"\n ],\n "PATCH": [\n "storm-client"\n ],\n "STANDARD": [\n "storm-client"\n ]\n },\n "STORM_UI_SERVER": {\n "STACK-SELECT-PACKAGE": "storm-client",\n "INSTALL": [\n "storm-client"\n ],\n "PATCH": [\n "storm-client"\n ],\n "STANDARD": [\n "storm-client"\n ]\n }\n },\n "SUPERSET": {\n "SUPERSET": {\n "STACK-SELECT-PACKAGE": "superset",\n "INSTALL": [\n "superset"\n ],\n "PATCH": [\n "superset"\n ],\n "STANDARD": [\n "superset"\n ]\n }\n },\n "TEZ": {\n "TEZ_CLIENT": {\n "STACK-SELECT-PACKAGE": "tez-client",\n "INSTALL": [\n "tez-client"\n ],\n "PATCH": [\n "tez-client"\n ],\n "STANDARD": [\n "hadoop-client"\n ]\n }\n },\n "YARN": {\n "APP_TIMELINE_SERVER": {\n "STACK-SELECT-PACKAGE": "hadoop-yarn-timelineserver",\n "INSTALL": [\n "hadoop-yarn-timelineserver"\n ],\n "PATCH": [\n "hadoop-yarn-timelineserver"\n ],\n "STANDARD": [\n "hadoop-yarn-timelineserver"\n ]\n },\n "TIMELINE_READER": {\n "STACK-SELECT-PACKAGE": "hadoop-yarn-timelinereader",\n "INSTALL": [\n "hadoop-yarn-timelinereader"\n ],\n "PATCH": [\n "hadoop-yarn-timelinereader"\n ],\n "STANDARD": [\n "hadoop-yarn-timelinereader"\n ]\n },\n "NODEMANAGER": {\n "STACK-SELECT-PACKAGE": "hadoop-yarn-nodemanager",\n "INSTALL": [\n "hadoop-yarn-nodemanager"\n ],\n "PATCH": [\n "hadoop-yarn-nodemanager"\n ],\n "STANDARD": [\n "hadoop-yarn-nodemanager"\n ]\n },\n "RESOURCEMANAGER": {\n "STACK-SELECT-PACKAGE": "hadoop-yarn-resourcemanager",\n "INSTALL": [\n "hadoop-yarn-resourcemanager"\n ],\n "PATCH": [\n "hadoop-yarn-resourcemanager"\n ],\n "STANDARD": [\n "hadoop-yarn-resourcemanager"\n ]\n },\n "YARN_CLIENT": {\n "STACK-SELECT-PACKAGE": "hadoop-yarn-client",\n "INSTALL": [\n "hadoop-yarn-client"\n ],\n "PATCH": [\n "hadoop-yarn-client"\n ],\n "STANDARD": [\n "hadoop-client"\n ]\n },\n "YARN_REGISTRY_DNS": {\n "STACK-SELECT-PACKAGE": "hadoop-yarn-registrydns",\n "INSTALL": [\n "hadoop-yarn-registrydns"\n ],\n "PATCH": [\n "hadoop-yarn-registrydns"\n ],\n "STANDARD": [\n "hadoop-yarn-registrydns"\n ]\n }\n },\n "ZEPPELIN": {\n "ZEPPELIN_MASTER": {\n "STACK-SELECT-PACKAGE": "zeppelin-server",\n "INSTALL": [\n "zeppelin-server"\n ],\n "PATCH": [\n "zeppelin-server"\n ],\n "STANDARD": [\n "zeppelin-server"\n ]\n }\n },\n "ZOOKEEPER": {\n "ZOOKEEPER_CLIENT": {\n "STACK-SELECT-PACKAGE": "zookeeper-client",\n "INSTALL": [\n "zookeeper-client"\n ],\n "PATCH": [\n "zookeeper-client"\n ],\n "STANDARD": [\n "zookeeper-client"\n ]\n },\n "ZOOKEEPER_SERVER": {\n "STACK-SELECT-PACKAGE": "zookeeper-server",\n "INSTALL": [\n "zookeeper-server"\n ],\n "PATCH": [\n "zookeeper-server"\n ],\n "STANDARD": [\n "zookeeper-server"\n ]\n }\n }\n },\n "conf-select": {\n "accumulo": [\n {\n "conf_dir": "/etc/accumulo/conf",\n "current_dir": "{0}/current/accumulo-client/conf"\n }\n ],\n "atlas": [\n {\n "conf_dir": "/etc/atlas/conf",\n "current_dir": "{0}/current/atlas-client/conf"\n }\n ],\n "druid": [\n {\n "conf_dir": "/etc/druid/conf",\n "current_dir": "{0}/current/druid-overlord/conf"\n }\n ],\n 
"hadoop": [\n {\n "conf_dir": "/etc/hadoop/conf",\n "current_dir": "{0}/current/hadoop-client/conf"\n }\n ],\n "hbase": [\n {\n "conf_dir": "/etc/hbase/conf",\n "current_dir": "{0}/current/hbase-client/conf"\n }\n ],\n "hive": [\n {\n "conf_dir": "/etc/hive/conf",\n "current_dir": "{0}/current/hive-client/conf"\n }\n ],\n "hive2": [\n {\n "conf_dir": "/etc/hive2/conf",\n "current_dir": "{0}/current/hive-server2-hive/conf"\n }\n ],\n "hive-hcatalog": [\n {\n "conf_dir": "/etc/hive-webhcat/conf",\n "prefix": "/etc/hive-webhcat",\n "current_dir": "{0}/current/hive-webhcat/etc/webhcat"\n },\n {\n "conf_dir": "/etc/hive-hcatalog/conf",\n "prefix": "/etc/hive-hcatalog",\n "current_dir": "{0}/current/hive-webhcat/etc/hcatalog"\n }\n ],\n "kafka": [\n {\n "conf_dir": "/etc/kafka/conf",\n "current_dir": "{0}/current/kafka-broker/conf"\n }\n ],\n "knox": [\n {\n "conf_dir": "/etc/knox/conf",\n "current_dir": "{0}/current/knox-server/conf"\n }\n ],\n "livy2": [\n {\n "conf_dir": "/etc/livy2/conf",\n "current_dir": "{0}/current/livy2-client/conf"\n }\n ],\n "nifi": [\n {\n "conf_dir": "/etc/nifi/conf",\n "current_dir": "{0}/current/nifi/conf"\n }\n ],\n "oozie": [\n {\n "conf_dir": "/etc/oozie/conf",\n "current_dir": "{0}/current/oozie-client/conf"\n }\n ],\n "phoenix": [\n {\n "conf_dir": "/etc/phoenix/conf",\n "current_dir": "{0}/current/phoenix-client/conf"\n }\n ],\n "pig": [\n {\n "conf_dir": "/etc/pig/conf",\n "current_dir": "{0}/current/pig-client/conf"\n }\n ],\n "ranger-admin": [\n {\n "conf_dir": "/etc/ranger/admin/conf",\n "current_dir": "{0}/current/ranger-admin/conf"\n }\n ],\n "ranger-kms": [\n {\n "conf_dir": "/etc/ranger/kms/conf",\n "current_dir": "{0}/current/ranger-kms/conf"\n }\n ],\n "ranger-tagsync": [\n {\n "conf_dir": "/etc/ranger/tagsync/conf",\n "current_dir": "{0}/current/ranger-tagsync/conf"\n }\n ],\n "ranger-usersync": [\n {\n "conf_dir": "/etc/ranger/usersync/conf",\n "current_dir": "{0}/current/ranger-usersync/conf"\n }\n ],\n "spark2": [\n {\n "conf_dir": "/etc/spark2/conf",\n "current_dir": "{0}/current/spark2-client/conf"\n }\n ],\n "sqoop": [\n {\n "conf_dir": "/etc/sqoop/conf",\n "current_dir": "{0}/current/sqoop-client/conf"\n }\n ],\n "storm": [\n {\n "conf_dir": "/etc/storm/conf",\n "current_dir": "{0}/current/storm-client/conf"\n }\n ],\n "superset": [\n {\n "conf_dir": "/etc/superset/conf",\n "current_dir": "{0}/current/superset/conf"\n }\n ],\n "tez": [\n {\n "conf_dir": "/etc/tez/conf",\n "current_dir": "{0}/current/tez-client/conf"\n }\n ],\n "zeppelin": [\n {\n "conf_dir": "/etc/zeppelin/conf",\n "current_dir": "{0}/current/zeppelin-server/conf"\n }\n ],\n "zookeeper": [\n {\n "conf_dir": "/etc/zookeeper/conf",\n "current_dir": "{0}/current/zookeeper-client/conf"\n }\n ]\n },\n "conf-select-patching": {\n "ACCUMULO": {\n "packages": ["accumulo"]\n },\n "ATLAS": {\n "packages": ["atlas"]\n },\n "DRUID": {\n "packages": ["druid"]\n },\n "FLUME": {\n "packages": ["flume"]\n },\n "HBASE": {\n "packages": ["hbase"]\n },\n "HDFS": {\n "packages": []\n },\n "HIVE": {\n "packages": ["hive", "hive-hcatalog", "hive2", "tez_hive2"]\n },\n "KAFKA": {\n "packages": ["kafka"]\n },\n "KNOX": {\n "packages": ["knox"]\n },\n "MAPREDUCE2": {\n "packages": []\n },\n "OOZIE": {\n "packages": ["oozie"]\n },\n "PIG": {\n "packages": ["pig"]\n },\n "R4ML": {\n "packages": []\n },\n "RANGER": {\n "packages": ["ranger-admin", "ranger-usersync", "ranger-tagsync"]\n },\n "RANGER_KMS": {\n "packages": ["ranger-kms"]\n },\n "SPARK2": {\n "packages": ["spark2", "livy2"]\n },\n "SQOOP": 
{\n "packages": ["sqoop"]\n },\n "STORM": {\n "packages": ["storm", "storm-slider-client"]\n },\n "SUPERSET": {\n "packages": ["superset"]\n },\n "SYSTEMML": {\n "packages": []\n },\n "TEZ": {\n "packages": ["tez"]\n },\n "TITAN": {\n "packages": []\n },\n "YARN": {\n "packages": []\n },\n "ZEPPELIN": {\n "packages": ["zeppelin"]\n },\n "ZOOKEEPER": {\n "packages": ["zookeeper"]\n }\n },\n "upgrade-dependencies" : {\n "ATLAS": ["STORM"],\n "HIVE": ["TEZ", "MAPREDUCE2", "SQOOP"],\n "TEZ": ["HIVE"],\n "MAPREDUCE2": ["HIVE"],\n "OOZIE": ["MAPREDUCE2"]\n }\n }\n}', u'ignore_groupsusers_create': u'false', u'alerts_repeat_tolerance': u'1', u'namenode_rolling_restart_timeout': u'4200', u'kerberos_domain': u'EXAMPLE.COM', u'manage_dirs_on_root': u'true', u'recovery_lifetime_max_count': u'1024', u'recovery_type': u'AUTO_START', u'stack_features': u'{\n "HDP": {\n "stack_features": [\n {\n "name": "snappy",\n "description": "Snappy compressor/decompressor support",\n "min_version": "2.0.0.0",\n "max_version": "2.2.0.0"\n },\n {\n "name": "lzo",\n "description": "LZO libraries support",\n "min_version": "2.2.1.0"\n },\n {\n "name": "express_upgrade",\n "description": "Express upgrade support",\n "min_version": "2.1.0.0"\n },\n {\n "name": "rolling_upgrade",\n "description": "Rolling upgrade support",\n "min_version": "2.2.0.0"\n },\n {\n "name": "kafka_acl_migration_support",\n "description": "ACL migration support",\n "min_version": "2.3.4.0"\n },\n {\n "name": "secure_zookeeper",\n "description": "Protect ZNodes with SASL acl in secure clusters",\n "min_version": "2.6.0.0"\n },\n {\n "name": "config_versioning",\n "description": "Configurable versions support",\n "min_version": "2.3.0.0"\n },\n {\n "name": "datanode_non_root",\n "description": "DataNode running as non-root support (AMBARI-7615)",\n "min_version": "2.2.0.0"\n },\n {\n "name": "remove_ranger_hdfs_plugin_env",\n "description": "HDFS removes Ranger env files (AMBARI-14299)",\n "min_version": "2.3.0.0"\n },\n {\n "name": "ranger",\n "description": "Ranger Service support",\n "min_version": "2.2.0.0"\n },\n {\n "name": "ranger_tagsync_component",\n "description": "Ranger Tagsync component support (AMBARI-14383)",\n "min_version": "2.5.0.0"\n },\n {\n "name": "phoenix",\n "description": "Phoenix Service support",\n "min_version": "2.3.0.0"\n },\n {\n "name": "nfs",\n "description": "NFS support",\n "min_version": "2.3.0.0"\n },\n {\n "name": "tez_for_spark",\n "description": "Tez dependency for Spark",\n "min_version": "2.2.0.0",\n "max_version": "2.3.0.0"\n },\n {\n "name": "timeline_state_store",\n "description": "Yarn application timeline-service supports state store property (AMBARI-11442)",\n "min_version": "2.2.0.0"\n },\n {\n "name": "copy_tarball_to_hdfs",\n "description": "Copy tarball to HDFS support (AMBARI-12113)",\n "min_version": "2.2.0.0"\n },\n {\n "name": "spark_16plus",\n "description": "Spark 1.6+",\n "min_version": "2.4.0.0"\n },\n {\n "name": "spark_thriftserver",\n "description": "Spark Thrift Server",\n "min_version": "2.3.2.0"\n },\n {\n "name": "storm_ams",\n "description": "Storm AMS integration (AMBARI-10710)",\n "min_version": "2.2.0.0"\n },\n {\n "name": "kafka_listeners",\n "description": "Kafka listeners (AMBARI-10984)",\n "min_version": "2.3.0.0"\n },\n {\n "name": "kafka_kerberos",\n "description": "Kafka Kerberos support (AMBARI-10984)",\n "min_version": "2.3.0.0"\n },\n {\n "name": "pig_on_tez",\n "description": "Pig on Tez support (AMBARI-7863)",\n "min_version": "2.2.0.0"\n },\n {\n "name": 
"ranger_usersync_non_root",\n "description": "Ranger Usersync as non-root user (AMBARI-10416)",\n "min_version": "2.3.0.0"\n },\n {\n "name": "ranger_audit_db_support",\n "description": "Ranger Audit to DB support",\n "min_version": "2.2.0.0",\n "max_version": "2.4.99.99"\n },\n {\n "name": "accumulo_kerberos_user_auth",\n "description": "Accumulo Kerberos User Auth (AMBARI-10163)",\n "min_version": "2.3.0.0"\n },\n {\n "name": "knox_versioned_data_dir",\n "description": "Use versioned data dir for Knox (AMBARI-13164)",\n "min_version": "2.3.2.0"\n },\n {\n "name": "knox_sso_topology",\n "description": "Knox SSO Topology support (AMBARI-13975)",\n "min_version": "2.3.8.0"\n },\n {\n "name": "atlas_rolling_upgrade",\n "description": "Rolling upgrade support for Atlas",\n "min_version": "2.3.0.0"\n },\n {\n "name": "oozie_admin_user",\n "description": "Oozie install user as an Oozie admin user (AMBARI-7976)",\n "min_version": "2.2.0.0"\n },\n {\n "name": "oozie_create_hive_tez_configs",\n "description": "Oozie create configs for Ambari Hive and Tez deployments (AMBARI-8074)",\n "min_version": "2.2.0.0"\n },\n {\n "name": "oozie_setup_shared_lib",\n "description": "Oozie setup tools used to shared Oozie lib to HDFS (AMBARI-7240)",\n "min_version": "2.2.0.0"\n },\n {\n "name": "oozie_host_kerberos",\n "description": "Oozie in secured clusters uses _HOST in Kerberos principal (AMBARI-9775)",\n "min_version": "2.0.0.0"\n },\n {\n "name": "falcon_extensions",\n "description": "Falcon Extension",\n "min_version": "2.5.0.0"\n },\n {\n "name": "hive_metastore_upgrade_schema",\n "description": "Hive metastore upgrade schema support (AMBARI-11176)",\n "min_version": "2.3.0.0"\n },\n {\n "name": "hive_server_interactive",\n "description": "Hive server interactive support (AMBARI-15573)",\n "min_version": "2.5.0.0"\n },\n {\n "name": "hive_purge_table",\n "description": "Hive purge table support (AMBARI-12260)",\n "min_version": "2.3.0.0"\n },\n {\n "name": "hive_server2_kerberized_env",\n "description": "Hive server2 working on kerberized environment (AMBARI-13749)",\n "min_version": "2.2.3.0",\n "max_version": "2.2.5.0"\n },\n {\n "name": "hive_env_heapsize",\n "description": "Hive heapsize property defined in hive-env (AMBARI-12801)",\n "min_version": "2.2.0.0"\n },\n {\n "name": "ranger_kms_hsm_support",\n "description": "Ranger KMS HSM support (AMBARI-15752)",\n "min_version": "2.5.0.0"\n },\n {\n "name": "ranger_log4j_support",\n "description": "Ranger supporting log-4j properties (AMBARI-15681)",\n "min_version": "2.5.0.0"\n },\n {\n "name": "ranger_kerberos_support",\n "description": "Ranger Kerberos support",\n "min_version": "2.5.0.0"\n },\n {\n "name": "hive_metastore_site_support",\n "description": "Hive Metastore site support",\n "min_version": "2.5.0.0"\n },\n {\n "name": "ranger_usersync_password_jceks",\n "description": "Saving Ranger Usersync credentials in jceks",\n "min_version": "2.5.0.0"\n },\n {\n "name": "ranger_install_infra_client",\n "description": "Ambari Infra Service support",\n "min_version": "2.5.0.0"\n },\n {\n "name": "falcon_atlas_support_2_3",\n "description": "Falcon Atlas integration support for 2.3 stack",\n "min_version": "2.3.99.0",\n "max_version": "2.4.0.0"\n },\n {\n "name": "falcon_atlas_support",\n "description": "Falcon Atlas integration",\n "min_version": "2.5.0.0"\n },\n {\n "name": "hbase_home_directory",\n "description": "Hbase home directory in HDFS needed for HBASE backup",\n "min_version": "2.5.0.0"\n },\n {\n "name": "spark_livy",\n "description": 
"Livy as slave component of spark",\n "min_version": "2.5.0.0"\n },\n {\n "name": "spark_livy2",\n "description": "Livy2 as slave component of Spark2",\n "min_version": "2.6.0.0"\n },\n {\n "name": "atlas_ranger_plugin_support",\n "description": "Atlas Ranger plugin support",\n "min_version": "2.5.0.0"\n },\n {\n "name": "atlas_conf_dir_in_path",\n "description": "Prepend the Atlas conf dir (/etc/atlas/conf) to the classpath of Storm and Falcon",\n "min_version": "2.3.0.0",\n "max_version": "2.4.99.99"\n },\n {\n "name": "atlas_upgrade_support",\n "description": "Atlas supports express and rolling upgrades",\n "min_version": "2.5.0.0"\n },\n {\n "name": "atlas_hook_support",\n "description": "Atlas support for hooks in Hive, Storm, Falcon, and Sqoop",\n "min_version": "2.5.0.0"\n },\n {\n "name": "ranger_pid_support",\n "description": "Ranger Service support pid generation AMBARI-16756",\n "min_version": "2.5.0.0"\n },\n {\n "name": "ranger_kms_pid_support",\n "description": "Ranger KMS Service support pid generation",\n "min_version": "2.5.0.0"\n },\n {\n "name": "ranger_admin_password_change",\n "description": "Allow ranger admin credentials to be specified during cluster creation (AMBARI-17000)",\n "min_version": "2.5.0.0"\n },\n {\n "name": "ranger_setup_db_on_start",\n "description": "Allows setup of ranger db and java patches to be called multiple times on each START",\n "min_version": "2.6.0.0"\n },\n {\n "name": "storm_metrics_apache_classes",\n "description": "Metrics sink for Storm that uses Apache class names",\n "min_version": "2.5.0.0"\n },\n {\n "name": "spark_java_opts_support",\n "description": "Allow Spark to generate java-opts file",\n "min_version": "2.2.0.0",\n "max_version": "2.4.0.0"\n },\n {\n "name": "atlas_hbase_setup",\n "description": "Use script to create Atlas tables in Hbase and set permissions for Atlas user.",\n "min_version": "2.5.0.0"\n },\n {\n "name": "ranger_hive_plugin_jdbc_url",\n "description": "Handle Ranger hive repo config jdbc url change for stack 2.5 (AMBARI-18386)",\n "min_version": "2.5.0.0"\n },\n {\n "name": "zkfc_version_advertised",\n "description": "ZKFC advertise version",\n "min_version": "2.5.0.0"\n },\n {\n "name": "phoenix_core_hdfs_site_required",\n "description": "HDFS and CORE site required for Phoenix",\n "max_version": "2.5.9.9"\n },\n {\n "name": "ranger_tagsync_ssl_xml_support",\n "description": "Ranger Tagsync ssl xml support.",\n "min_version": "2.6.0.0"\n },\n {\n "name": "ranger_xml_configuration",\n "description": "Ranger code base support xml configurations",\n "min_version": "2.3.0.0"\n },\n {\n "name": "kafka_ranger_plugin_support",\n "description": "Ambari stack changes for Ranger Kafka Plugin (AMBARI-11299)",\n "min_version": "2.3.0.0"\n },\n {\n "name": "yarn_ranger_plugin_support",\n "description": "Implement Stack changes for Ranger Yarn Plugin integration (AMBARI-10866)",\n "min_version": "2.3.0.0"\n },\n {\n "name": "ranger_solr_config_support",\n "description": "Showing Ranger solrconfig.xml on UI",\n "min_version": "2.6.0.0"\n },\n {\n "name": "hive_interactive_atlas_hook_required",\n "description": "Registering Atlas Hook for Hive Interactive.",\n "min_version": "2.6.0.0"\n },\n {\n "name": "atlas_install_hook_package_support",\n "description": "Stop installing packages from 2.6",\n "max_version": "2.5.9.9"\n },\n {\n "name": "atlas_hdfs_site_on_namenode_ha",\n "description": "Need to create hdfs-site under atlas-conf dir when Namenode-HA is enabled.",\n "min_version": "2.6.0.0"\n },\n {\n "name": 
"core_site_for_ranger_plugins",\n "description": "Adding core-site.xml in when Ranger plugin is enabled for Storm, Kafka, and Knox.",\n "min_version": "2.6.0.0"\n },\n {\n "name": "secure_ranger_ssl_password",\n "description": "Securing Ranger Admin and Usersync SSL and Trustore related passwords in jceks",\n "min_version": "2.6.0.0"\n },\n {\n "name": "ranger_kms_ssl",\n "description": "Ranger KMS SSL properties in ambari stack",\n "min_version": "2.6.0.0"\n },\n {\n "name": "atlas_hdfs_site_on_namenode_ha",\n "description": "Need to create hdfs-site under atlas-conf dir when Namenode-HA is enabled.",\n "min_version": "2.6.0.0"\n },\n {\n "name": "atlas_core_site_support",\n "description": "Need to create core-site under Atlas conf directory.",\n "min_version": "2.6.0.0"\n },\n {\n "name": "toolkit_config_update",\n "description": "Support separate input and output for toolkit configuration",\n "min_version": "2.6.0.0"\n },\n {\n "name": "nifi_encrypt_config",\n "description": "Encrypt sensitive properties written to nifi property file",\n "min_version": "2.6.0.0"\n },\n {\n "name": "tls_toolkit_san",\n "description": "Support subject alternative name flag",\n "min_version": "2.6.0.0"\n },\n {\n "name": "admin_toolkit_support",\n "description": "Supports the nifi admin toolkit",\n "min_version": "2.6.0.0"\n },\n {\n "name": "nifi_jaas_conf_create",\n "description": "Create NIFI jaas configuration when kerberos is enabled",\n "min_version": "2.6.0.0"\n },\n {\n "name": "registry_remove_rootpath",\n "description": "Registry remove root path setting",\n "min_version": "2.6.3.0"\n },\n {\n "name": "nifi_encrypted_authorizers_config",\n "description": "Support encrypted authorizers.xml configuration for version 3.1 onwards",\n "min_version": "2.6.5.0"\n },\n {\n "name": "multiple_env_sh_files_support",\n "description": "This feature is supported by RANGER and RANGER_KMS service to remove multiple env sh files during upgrade to stack 3.0",\n "max_version": "2.6.99.99"\n },\n {\n "name": "registry_allowed_resources_support",\n "description": "Registry allowed resources",\n "min_version": "3.0.0.0"\n },\n {\n "name": "registry_rewriteuri_filter_support",\n "description": "Registry RewriteUri servlet filter",\n "min_version": "3.0.0.0"\n },\n {\n "name": "registry_support_schema_migrate",\n "description": "Support schema migrate in registry for version 3.1 onwards",\n "min_version": "3.0.0.0"\n },\n {\n "name": "sam_support_schema_migrate",\n "description": "Support schema migrate in SAM for version 3.1 onwards",\n "min_version": "3.0.0.0"\n },\n {\n "name": "sam_storage_core_in_registry",\n "description": "Storage core module moved to registry",\n "min_version": "3.0.0.0"\n },\n {\n "name": "sam_db_file_storage",\n "description": "DB based file storage in SAM",\n "min_version": "3.0.0.0"\n },\n {\n "name": "kafka_extended_sasl_support",\n "description": "Support SASL PLAIN and GSSAPI",\n "min_version": "3.0.0.0"\n },\n {\n "name": "registry_support_db_user_creation",\n "description": "Supports registry\'s database and user creation on the fly",\n "min_version": "3.0.0.0"\n },\n {\n "name": "streamline_support_db_user_creation",\n "description": "Supports Streamline\'s database and user creation on the fly",\n "min_version": "3.0.0.0"\n },\n {\n "name": "nifi_auto_client_registration",\n "description": "Supports NiFi\'s client registration in runtime",\n "min_version": "3.0.0.0"\n }\n ]\n }\n}', u'smokeuser': u'ambari-qa', u'ignore_bad_mounts': u'false', u'recovery_window_in_minutes': u'60', 
u'sysprep_skip_setup_jce': u'false', u'user_group': u'hadoop', u'namenode_rolling_restart_safemode_exit_timeout': u'3600', u'stack_tools': u'{\n "HDP": {\n "stack_selector": [\n "hdp-select",\n "/usr/bin/hdp-select",\n "hdp-select"\n ],\n "conf_selector": [\n "conf-select",\n "/usr/bin/conf-select",\n "conf-select"\n ]\n }\n}', u'recovery_retry_interval': u'5', u'sysprep_skip_copy_oozie_share_lib_to_hdfs': u'false', u'sysprep_skip_copy_tarballs_hdfs': u'false', u'smokeuser_principal_name': u'ambari-qa-{clustername}@PLATFORM.{clustername}.DE', u'recovery_max_count': u'6', u'stack_root': u'{"HDP":"/usr/hdp"}', u'sysprep_skip_create_users_and_groups': u'false', u'repo_suse_rhel_template': u'[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', u'ambari_principal_name': u'ambari-server-{clustername}@PLATFORM.{clustername}.DE', u'smokeuser_keytab': u'/etc/security/keytabs/smokeuser.headless.keytab', u'managed_hdfs_resource_property_names': u'', u'recovery_enabled': u'true', u'sysprep_skip_copy_fast_jar_hdfs': u'false'}, u'mapred-site': {u'mapreduce.jobhistory.admin.acl': u'*', u'mapreduce.jobhistory.address': u'{host}:10020', u'mapreduce.cluster.administrators': u' hadoop', u'mapreduce.reduce.input.buffer.percent': u'0.0', u'mapreduce.output.fileoutputformat.compress': u'false', u'mapreduce.job.counters.max': u'130', u'mapreduce.framework.name': u'yarn', u'mapreduce.jobhistory.recovery.store.leveldb.path': u'/hadoop/mapreduce/jhs', u'mapreduce.reduce.shuffle.merge.percent': u'0.66', u'yarn.app.mapreduce.am.resource.mb': u'27648', u'mapreduce.map.java.opts': u'-Xmx22118m', u'mapred.local.dir': u'/hadoop/mapred', u'mapreduce.application.classpath': u'$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure', u'mapreduce.job.acl-modify-job': u' ', u'mapreduce.jobhistory.http.policy': u'HTTP_ONLY', u'mapreduce.output.fileoutputformat.compress.type': u'BLOCK', u'mapreduce.reduce.speculative': u'false', u'mapreduce.reduce.java.opts': u'-Xmx22118m', u'mapreduce.am.max-attempts': u'2', u'yarn.app.mapreduce.am.admin-command-opts': u'-Dhdp.version=${hdp.version}', u'mapreduce.reduce.log.level': u'INFO', u'mapreduce.map.sort.spill.percent': u'0.7', u'mapreduce.job.emit-timeline-data': u'true', u'mapreduce.application.framework.path': u'/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework', u'mapreduce.task.timeout': u'300000', u'mapreduce.map.memory.mb': u'27648', u'mapreduce.job.queuename': u'default', u'mapreduce.job.acl-view-job': u' ', u'mapreduce.jobhistory.intermediate-done-dir': u'/mr-history/tmp', u'mapreduce.reduce.memory.mb': u'27648', u'mapreduce.jobhistory.recovery.enable': u'true', u'yarn.app.mapreduce.am.log.level': u'INFO', u'mapreduce.map.log.level': u'INFO', u'mapreduce.shuffle.port': u'13562', u'mapreduce.map.speculative': u'false', u'mapreduce.reduce.shuffle.fetch.retry.timeout-ms': u'30000', u'mapreduce.admin.user.env': 
u'LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-{{architecture}}-64', u'mapreduce.jobhistory.recovery.store.class': u'org.apache.hadoop.mapreduce.v2.hs.HistoryServerLeveldbStateStoreService', u'mapreduce.task.io.sort.factor': u'100', u'mapreduce.map.output.compress': u'false', u'mapreduce.job.reduce.slowstart.completedmaps': u'0.05', u'mapreduce.cluster.acls.enabled': u'false', u'mapreduce.jobhistory.webapp.address': u'{host}:19888', u'mapreduce.reduce.shuffle.parallelcopies': u'30', u'mapreduce.reduce.shuffle.input.buffer.percent': u'0.7', u'yarn.app.mapreduce.am.staging-dir': u'/user', u'mapreduce.jobhistory.done-dir': u'/mr-history/done', u'mapreduce.admin.reduce.child.java.opts': u'-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}', u'mapreduce.reduce.shuffle.fetch.retry.enabled': u'1', u'mapreduce.task.io.sort.mb': u'2047', u'yarn.app.mapreduce.am.command-opts': u'-Xmx22118m -Dhdp.version=${hdp.version}', u'mapreduce.reduce.shuffle.fetch.retry.interval-ms': u'1000', u'mapreduce.jobhistory.bind-host': u'0.0.0.0', u'mapreduce.admin.map.child.java.opts': u'-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}'}, u'ranger-yarn-plugin-properties': {}, u'ams-hbase-site': {u'hbase.master.info.bindAddress': u'0.0.0.0', u'hbase.normalizer.enabled': u'false', u'phoenix.mutate.batchSize': u'10000', u'hbase.zookeeper.property.dataDir': u'${hbase.tmp.dir}/zookeeper', u'phoenix.query.keepAliveMs': u'300000', u'hbase.rootdir': u'file:///var/lib/ambari-metrics-collector/hbase', u'hbase.replication': u'false', u'dfs.client.read.shortcircuit': u'true', u'hbase.regionserver.global.memstore.lowerLimit': u'0.3', u'hbase.hregion.memstore.block.multiplier': u'4', u'hbase.hregion.memstore.flush.size': u'134217728', u'hbase.zookeeper.property.clientPort': u'{{zookeeper_clientPort}}', u'hbase.unsafe.stream.capability.enforce': u'false', u'phoenix.spool.directory': u'${hbase.tmp.dir}/phoenix-spool', u'phoenix.query.rowKeyOrderSaltedTable': u'true', u'hbase.client.scanner.timeout.period': u'300000', u'phoenix.groupby.maxCacheSize': u'307200000', u'hbase.normalizer.period': u'600000', u'hbase.snapshot.enabled': u'false', u'hbase.master.port': u'61300', u'hbase.master.wait.on.regionservers.mintostart': u'1', u'hbase.regionserver.global.memstore.upperLimit': u'0.35', u'phoenix.query.spoolThresholdBytes': u'20971520', u'zookeeper.session.timeout': u'120000', u'hbase.tmp.dir': u'/var/lib/ambari-metrics-collector/hbase-tmp', u'hbase.hregion.max.filesize': u'4294967296', u'hbase.rpc.timeout': u'300000', u'hfile.block.cache.size': u'0.3', u'hbase.regionserver.port': u'61320', u'hbase.regionserver.thread.compaction.small': u'3', u'hbase.master.info.port': u'61310', u'phoenix.coprocessor.maxMetaDataCacheSize': u'20480000', u'phoenix.query.maxGlobalMemoryPercentage': u'15', u'hbase.zookeeper.quorum': u'{{zookeeper_quorum_hosts}}', u'hbase.regionserver.info.port': u'61330', u'zookeeper.znode.parent': u'/ams-hbase-unsecure', u'hbase.hstore.blockingStoreFiles': u'200', u'hbase.hregion.majorcompaction': u'0', u'hbase.zookeeper.leaderport': u'61388', u'hbase.hstore.flusher.count': u'2', u'hbase.master.normalizer.class': u'org.apache.hadoop.hbase.master.normalizer.SimpleRegionNormalizer', u'hbase.regionserver.thread.compaction.large': u'2', u'phoenix.query.timeoutMs': u'300000', u'hbase.local.dir': u'${hbase.tmp.dir}/local', u'hbase.cluster.distributed': u'false', 
u'zookeeper.session.timeout.localHBaseCluster': u'120000', u'hbase.client.scanner.caching': u'10000', u'phoenix.sequence.saltBuckets': u'2', u'phoenix.coprocessor.maxServerCacheTimeToLiveMs': u'60000', u'hbase.zookeeper.property.tickTime': u'6000', u'hbase.zookeeper.peerport': u'61288'}, u'ssl-client': {u'ssl.client.truststore.reload.interval': u'10000', u'ssl.client.keystore.password': u'', u'ssl.client.truststore.type': u'jks', u'ssl.client.keystore.location': u'', u'ssl.client.truststore.location': u'', u'ssl.client.truststore.password': u'', u'ssl.client.keystore.type': u'jks'}, u'hivemetastore-site': {u'hive.compactor.worker.threads': u'5', u'hive.service.metrics.hadoop2.component': u'hivemetastore', u'hive.compactor.initiator.on': u'true', u'hive.metastore.dml.events': u'true', u'hive.metastore.transactional.event.listeners': u'org.apache.hive.hcatalog.listener.DbNotificationListener', u'hive.server2.metrics.enabled': u'true', u'hive.metastore.event.listeners': u'', u'hive.metastore.metrics.enabled': u'true', u'hive.service.metrics.reporter': u'HADOOP2'}, u'product-info': {u'product-info-content': u'\n{\n "schemaVersion" : "1.0.0",\n "productId": "{{stackName}}",\n "componentId": "{{stackName}}",\n "productVersion" : "{{stackVersion}}",\n "type":"cluster",\n "instanceInfo": {\n "guid": "",\n "parentGuid": "",\n "name":"{{clusterName}}",\n "flexSubscriptionId": "{{flexSubscriptionId}}",\n "provider": "",\n "region": ""\n }\n}'}, u'ams-site': {u'timeline.metrics.host.aggregator.minute.ttl': u'604800', u'timeline.metrics.cluster.aggregator.minute.checkpointCutOffMultiplier': u'2', u'cluster.zookeeper.property.clientPort': u'{{cluster_zookeeper_clientPort}}', u'timeline.metrics.cluster.aggregator.hourly.disabled': u'false', u'timeline.metrics.cluster.aggregator.second.timeslice.interval': u'30', u'timeline.metrics.service.http.policy': u'HTTP_ONLY', u'timeline.metrics.cluster.aggregator.minute.ttl': u'2592000', u'timeline.metrics.host.aggregator.minute.interval': u'300', u'failover.strategy': u'round-robin', u'timeline.metrics.cluster.aggregator.daily.interval': u'86400', u'timeline.metrics.cluster.aggregator.hourly.ttl': u'31536000', u'timeline.metrics.transient.metric.patterns': u'topology\\.%', u'timeline.metrics.cluster.aggregator.interpolation.enabled': u'true', u'timeline.metrics.host.inmemory.aggregation.port': u'61888', u'timeline.metrics.downsampler.topn.value': u'10', u'timeline.metrics.host.aggregator.daily.disabled': u'false', u'timeline.metrics.service.watcher.timeout': u'30', u'timeline.metrics.downsampler.topn.function': u'max', u'timeline.metrics.daily.aggregator.minute.interval': u'86400', u'timeline.metrics.host.inmemory.aggregation': u'false', u'timeline.metrics.hbase.compression.scheme': u'SNAPPY', u'timeline.metrics.cluster.aggregator.hourly.interval': u'3600', u'timeline.metrics.aggregators.skip.blockcache.enabled': u'false', u'phoenix.spool.directory': u'/tmp', u'timeline.metrics.host.aggregator.ttl': u'86400', u'timeline.metrics.sink.report.interval': u'60', u'timeline.metrics.service.use.groupBy.aggregators': u'true', u'cluster.zookeeper.quorum': u'{{cluster_zookeeper_quorum_hosts}}', u'timeline.metrics.cluster.aggregator.daily.checkpointCutOffMultiplier': u'2', u'timeline.metrics.cluster.aggregation.sql.filters': u'sdisk\\_%,boottime', u'timeline.metrics.service.webapp.address': u'0.0.0.0:6188', u'timeline.metrics.cluster.aggregator.daily.ttl': u'63072000', u'timeline.metrics.whitelisting.enabled': u'false', u'timeline.metrics.aggregator.checkpoint.dir': 
u'/var/lib/ambari-metrics-collector/checkpoint', u'timeline.metrics.hbase.data.block.encoding': u'FAST_DIFF', u'timeline.metrics.host.aggregator.minute.disabled': u'false', u'timeline.metrics.cluster.aggregator.second.ttl': u'259200', u'timeline.metrics.service.cluster.aggregator.appIds': u'datanode,nodemanager,hbase', u'timeline.metrics.downsampler.topn.metric.patterns': u'dfs.NNTopUserOpCounts.windowMs=60000.op=__%.user=%,dfs.NNTopUserOpCounts.windowMs=300000.op=__%.user=%,dfs.NNTopUserOpCounts.windowMs=1500000.op=__%.user=%', u'timeline.metrics.service.watcher.delay': u'30', u'timeline.metrics.cache.enabled': u'true', u'timeline.metrics.service.handler.thread.count': u'20', u'timeline.metrics.cluster.aggregator.minute.disabled': u'false', u'timeline.metrics.cluster.aggregator.minute.interval': u'300', u'timeline.metrics.cache.size': u'300', u'phoenix.query.maxGlobalMemoryPercentage': u'25', u'timeline.metrics.service.operation.mode': u'embedded', u'timeline.metrics.host.aggregator.minute.checkpointCutOffMultiplier': u'2', u'timeline.metrics.host.aggregator.hourly.disabled': u'false', u'timeline.metrics.hbase.init.check.enabled': u'true', u'timeline.metrics.host.aggregator.hourly.checkpointCutOffMultiplier': u'2', u'timeline.metrics.cluster.aggregator.daily.disabled': u'false', u'timeline.metrics.cluster.aggregator.second.disabled': u'false', u'timeline.metrics.service.rpc.address': u'0.0.0.0:60200', u'timeline.metrics.host.aggregator.hourly.ttl': u'2592000', u'timeline.metrics.downsampler.event.metric.patterns': u'topology\\.%', u'timeline.metrics.service.resultset.fetchSize': u'2000', u'timeline.metrics.service.watcher.initial.delay': u'600', u'timeline.metrics.cluster.aggregator.hourly.checkpointCutOffMultiplier': u'2', u'timeline.metrics.cache.commit.interval': u'9', u'timeline.metrics.service.default.result.limit': u'5760', u'timeline.metrics.service.checkpointDelay': u'60', u'timeline.metrics.host.aggregator.daily.ttl': u'31536000', u'timeline.metrics.cluster.aggregator.second.checkpointCutOffMultiplier': u'2', u'timeline.metrics.host.aggregator.daily.checkpointCutOffMultiplier': u'2', u'timeline.metrics.service.watcher.disabled': u'false', u'timeline.metrics.cluster.aggregator.second.interval': u'120', u'timeline.metrics.host.inmemory.aggregation.http.policy': u'HTTP_ONLY', u'timeline.metrics.host.aggregator.hourly.interval': u'3600', u'timeline.metrics.service.metadata.filters': u'ContainerResource'}, u'ams-hbase-policy': {u'security.admin.protocol.acl': u'*', u'security.masterregion.protocol.acl': u'*', u'security.client.protocol.acl': u'*'}, u'hadoop-policy': {u'security.job.client.protocol.acl': u'*', u'security.job.task.protocol.acl': u'*', u'security.datanode.protocol.acl': u'*', u'security.namenode.protocol.acl': u'*', u'security.client.datanode.protocol.acl': u'*', u'security.inter.tracker.protocol.acl': u'*', u'security.refresh.usertogroups.mappings.protocol.acl': u'hadoop', u'security.client.protocol.acl': u'*', u'security.refresh.policy.protocol.acl': u'hadoop', u'security.admin.operations.protocol.acl': u'hadoop', u'security.inter.datanode.protocol.acl': u'*'}, u'spark2-env': {u'spark_pid_dir': u'/var/run/spark2', u'spark_daemon_memory': u'1024', u'hive_kerberos_keytab': u'{{hive_kerberos_keytab}}', u'spark_user': u'spark', u'content': u'\n#!/usr/bin/env bash\n\n# This file is sourced when running various Spark programs.\n# Copy it as spark-env.sh and edit that to configure Spark for your site.\n\n# Options read in YARN client mode\n#SPARK_EXECUTOR_INSTANCES="2" 
#Number of workers to start (Default: 2)\n#SPARK_EXECUTOR_CORES="1" #Number of cores for the workers (Default: 1).\n#SPARK_EXECUTOR_MEMORY="1G" #Memory per Worker (e.g. 1000M, 2G) (Default: 1G)\n#SPARK_DRIVER_MEMORY="512M" #Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb)\n#SPARK_YARN_APP_NAME="spark" #The name of your application (Default: Spark)\n#SPARK_YARN_QUEUE="default" #The hadoop queue to use for allocation requests (Default: default)\n#SPARK_YARN_DIST_FILES="" #Comma separated list of files to be distributed with the job.\n#SPARK_YARN_DIST_ARCHIVES="" #Comma separated list of archives to be distributed with the job.\n\n{% if security_enabled %}\nexport SPARK_HISTORY_OPTS=\'-Dspark.ui.filters=org.apache.hadoop.security.authentication.server.AuthenticationFilter -Dspark.org.apache.hadoop.security.authentication.server.AuthenticationFilter.params="type=kerberos,kerberos.principal={{spnego_principal}},kerberos.keytab={{spnego_keytab}}"\'\n{% endif %}\n\n\n# Generic options for the daemons used in the standalone deploy mode\n\n# Alternate conf dir. (Default: ${SPARK_HOME}/conf)\nexport SPARK_CONF_DIR=${SPARK_CONF_DIR:-{{spark_home}}/conf}\n\n# Where log files are stored.(Default:${SPARK_HOME}/logs)\n#export SPARK_LOG_DIR=${SPARK_HOME:-{{spark_home}}}/logs\nexport SPARK_LOG_DIR={{spark_log_dir}}\n\n# Where the pid file is stored. (Default: /tmp)\nexport SPARK_PID_DIR={{spark_pid_dir}}\n\n#Memory for Master, Worker and history server (default: 1024MB)\nexport SPARK_DAEMON_MEMORY={{spark_daemon_memory}}m\n\n# A string representing this instance of spark.(Default: $USER)\nSPARK_IDENT_STRING=$USER\n\n# The scheduling priority for daemons. (Default: 0)\nSPARK_NICENESS=0\n\nexport HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\nexport HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-{{hadoop_conf_dir}}}\n\n# The java implementation to use.\nexport JAVA_HOME={{java_home}}', u'spark_thrift_cmd_opts': u'', u'spark_log_dir': u'/var/log/spark2', u'spark_group': u'spark', u'hive_kerberos_principal': u'{{hive_kerberos_principal}}'}, u'spark2-thrift-sparkconf': {u'spark.shuffle.file.buffer': u'1m', u'spark.yarn.maxAppAttempts': u'1', u'spark.driver.extraLibraryPath': u'{{spark_hadoop_lib_native}}', u'spark.executor.extraJavaOptions': u'-XX:+UseNUMA', u'spark.dynamicAllocation.minExecutors': u'3', u'spark.master': u'{{spark_thrift_master}}', u'spark.sql.autoBroadcastJoinThreshold': u'26214400', u'spark.dynamicAllocation.initialExecutors': u'3', u'spark.eventLog.dir': u'{{spark_history_dir}}', u'spark.sql.hive.metastore.jars': u'/usr/hdp/{{version}}/spark2/standalone-metastore/standalone-metastore-1.21.2.{{version}}-hive3.jar', u'spark.scheduler.mode': u'FAIR', u'spark.shuffle.unsafe.file.output.buffer': u'5m', u'spark.io.compression.lz4.blockSize': u'128kb', u'spark.eventLog.enabled': u'true', u'spark.executor.extraLibraryPath': u'{{spark_hadoop_lib_native}}', u'spark.shuffle.io.serverThreads': u'128', u'spark.shuffle.service.enabled': u'true', u'spark.yarn.executor.failuresValidityInterval': u'2h', u'spark.history.fs.logDirectory': u'{{spark_history_dir}}', u'spark.history.fs.cleaner.enabled': u'true', u'spark.dynamicAllocation.enabled': u'true', u'spark.sql.orc.impl': u'native', u'spark.yarn.queue': u'default', u'spark.sql.orc.filterPushdown': u'true', u'spark.sql.statistics.fallBackToHdfs': u'true', u'spark.hadoop.cacheConf': u'false', u'spark.history.provider': u'org.apache.spark.deploy.history.FsHistoryProvider', u'spark.history.fs.cleaner.maxAge': u'90d', u'spark.unsafe.sorter.spill.reader.buffer.size': 
u'1m', u'spark.scheduler.allocation.file': u'{{spark_conf}}/spark-thrift-fairscheduler.xml', u'spark.dynamicAllocation.maxExecutors': u'30', u'spark.sql.hive.convertMetastoreOrc': u'true', u'spark.sql.hive.metastore.version': u'3.0', u'spark.shuffle.io.backLog': u'8192', u'spark.history.fs.cleaner.interval': u'7d', u'spark.sql.warehouse.dir': u'/apps/spark/warehouse'}, u'resource-types': {u'yarn.resource-types.yarn.io_gpu.maximum-allocation': u'8', u'yarn.resource-types': u''}, u'mapred-env': {u'jobhistory_heapsize': u'900', u'mapred_log_dir_prefix': u'/var/log/hadoop-mapreduce', u'mapred_pid_dir_prefix': u'/var/run/hadoop-mapreduce', u'content': u'\n # export JAVA_HOME=/home/y/libexec/jdk1.8.0/\n\n export HADOOP_JOB_HISTORYSERVER_HEAPSIZE={{jobhistory_heapsize}}\n\n # We need to add the RFA appender for the mr daemons only;\n # however, HADOOP_MAPRED_LOGGER is shared by the mapred client and the\n # daemons. This will restrict the RFA appender to daemons only.\n export HADOOP_LOGLEVEL=${HADOOP_LOGLEVEL:-INFO}\n export HADOOP_ROOT_LOGGER=${HADOOP_ROOT_LOGGER:-INFO,console}\n export HADOOP_DAEMON_ROOT_LOGGER=${HADOOP_DAEMON_ROOT_LOGGER:-${HADOOP_LOGLEVEL},RFA}\n\n {% if security_enabled %}\n export MAPRED_HISTORYSERVER_OPTS="-Djava.security.auth.login.config={{mapred_jaas_file}} -Djavax.security.auth.useSubjectCredsOnly=false"\n {% endif %}\n\n #export HADOOP_JHS_LOGGER=INFO,RFA # Hadoop JobSummary logger.\n #export HADOOP_IDENT_STRING= #A string representing this instance of hadoop. $USER by default\n #export HADOOP_NICENESS= #The scheduling priority for daemons. Defaults to 0.\n export HADOOP_OPTS="-Dhdp.version=$HDP_VERSION $HADOOP_OPTS"\n export HADOOP_OPTS="-Djava.io.tmpdir={{hadoop_java_io_tmpdir}} $HADOOP_OPTS"\n export JAVA_LIBRARY_PATH="${JAVA_LIBRARY_PATH}:{{hadoop_java_io_tmpdir}}"\n\n # History server logs\n export HADOOP_LOG_DIR={{mapred_log_dir_prefix}}/$USER\n\n # History server pid\n export HADOOP_PID_DIR={{mapred_pid_dir_prefix}}/$USER', u'mapred_user_nofile_limit': u'32768', u'mapred_user_nproc_limit': u'65536', u'mapred_user': u'mapred'}, u'ldap-log4j': {u'content': u'\n # Licensed to the Apache Software Foundation (ASF) under one\n # or more contributor license agreements. See the NOTICE file\n # distributed with this work for additional information\n # regarding copyright ownership. The ASF licenses this file\n # to you under the Apache License, Version 2.0 (the\n # "License"); you may not use this file except in compliance\n # with the License. 
You may obtain a copy of the License at\n #\n # http://www.apache.org/licenses/LICENSE-2.0\n #\n # Unless required by applicable law or agreed to in writing, software\n # distributed under the License is distributed on an "AS IS" BASIS,\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n\n app.log.dir=${launcher.dir}/../logs\n app.log.file=${launcher.name}.log\n\n log4j.rootLogger=ERROR, drfa\n log4j.logger.org.apache.directory.server.ldap.LdapServer=INFO\n log4j.logger.org.apache.directory=WARN\n\n log4j.appender.stdout=org.apache.log4j.ConsoleAppender\n log4j.appender.stdout.layout=org.apache.log4j.PatternLayout\n log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\n\n log4j.appender.drfa=org.apache.log4j.DailyRollingFileAppender\n log4j.appender.drfa.File=${app.log.dir}/${app.log.file}\n log4j.appender.drfa.DatePattern=.yyyy-MM-dd\n log4j.appender.drfa.layout=org.apache.log4j.PatternLayout\n log4j.appender.drfa.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n\n log4j.appender.drfa.MaxFileSize = {{knox_ldap_log_maxfilesize}}MB\n log4j.appender.drfa.MaxBackupIndex = {{knox_ldap_log_maxbackupindex}}', u'knox_ldap_log_maxbackupindex': u'20', u'knox_ldap_log_maxfilesize': u'256'}, u'container-executor': {u'docker_allowed_ro-mounts': u'', u'docker_allowed_volume-drivers': u'', u'docker_allowed_devices': u'', u'gpu_module_enabled': u'false', u'docker_trusted_registries': u'', u'yarn_hierarchy': u'', u'docker_binary': u'/usr/bin/docker', u'content': u'{#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#}\n\n#/*\n# * Licensed to the Apache Software Foundation (ASF) under one\n# * or more contributor license agreements. See the NOTICE file\n# * distributed with this work for additional information\n# * regarding copyright ownership. The ASF licenses this file\n# * to you under the Apache License, Version 2.0 (the\n# * "License"); you may not use this file except in compliance\n# * with the License. 
You may obtain a copy of the License at\n# *\n# * http://www.apache.org/licenses/LICENSE-2.0\n# *\n# * Unless required by applicable law or agreed to in writing, software\n# * distributed under the License is distributed on an "AS IS" BASIS,\n# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# * See the License for the specific language governing permissions and\n# * limitations under the License.\n# */\nyarn.nodemanager.local-dirs={{nm_local_dirs}}\nyarn.nodemanager.log-dirs={{nm_log_dirs}}\nyarn.nodemanager.linux-container-executor.group={{yarn_executor_container_group}}\nbanned.users=hdfs,yarn,mapred,bin\nmin.user.id={{min_user_id}}\n\n{{ \'[docker]\' }}\n module.enabled={{docker_module_enabled}}\n docker.binary={{docker_binary}}\n docker.allowed.capabilities={{docker_allowed_capabilities}}\n docker.allowed.devices={{docker_allowed_devices}}\n docker.allowed.networks={{docker_allowed_networks}}\n docker.allowed.ro-mounts={{nm_local_dirs}},{{docker_allowed_ro_mounts}}\n docker.allowed.rw-mounts={{nm_local_dirs}},{{nm_log_dirs}},{{docker_allowed_rw_mounts}}\n docker.privileged-containers.enabled={{docker_privileged_containers_enabled}}\n docker.trusted.registries={{docker_trusted_registries}}\n docker.allowed.volume-drivers={{docker_allowed_volume_drivers}}\n\n{{ \'[gpu]\' }}\n module.enabled={{gpu_module_enabled}}\n\n{{ \'[cgroups]\' }}\n root={{cgroup_root}}\n yarn-hierarchy={{yarn_hierarchy}}', u'cgroup_root': u'', u'docker_module_enabled': u'false', u'docker_allowed_rw-mounts': u'', u'min_user_id': u'1000', u'docker_privileged-containers_enabled': u'false'}, u'hive-env': {u'hive.heapsize': u'36194', u'alert_ldap_password': u'', u'hive_user_nproc_limit': u'16000', u'hive.atlas.hook': u'false', u'hive_ambari_database': u'MySQL', u'hive_database': u'New MySQL Database', u'hive_security_authorization': u'None', u'enable_heap_dump': u'false', u'hive.log.level': u'INFO', u'hive_database_name': u'hive', u'hive_database_type': u'mysql', u'hive_pid_dir': u'/var/run/hive', u'hive_timeline_logging_enabled': u'false', u'hive_user_nofile_limit': u'32000', u'hive_user': u'hive', u'hive.metastore.heapsize': u'12064', u'content': u'\n# The heap size of the jvm, and jvm args stared by hive shell script can be controlled via:\nif [ "$SERVICE" = "metastore" ]; then\n\n export HADOOP_HEAPSIZE={{hive_metastore_heapsize}} # Setting for HiveMetastore\n export HADOOP_OPTS="$HADOOP_OPTS -Xloggc:{{hive_log_dir}}/hivemetastore-gc-%t.log -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCCause -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath={{hive_log_dir}}/hms_heapdump.hprof -Dhive.log.dir={{hive_log_dir}} -Dhive.log.file=hivemetastore.log"\n\nfi\n\nif [ "$SERVICE" = "hiveserver2" ]; then\n\n export HADOOP_HEAPSIZE={{hive_heapsize}} # Setting for HiveServer2 and Client\n export HADOOP_OPTS="$HADOOP_OPTS -Xloggc:{{hive_log_dir}}/hiveserver2-gc-%t.log -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCCause -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath={{hive_log_dir}}/hs2_heapdump.hprof -Dhive.log.dir={{hive_log_dir}} -Dhive.log.file=hiveserver2.log"\n\nfi\n\nexport HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Xmx${HADOOP_HEAPSIZE}m"\nexport HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS{{heap_dump_opts}}"\n\n# Larger heap size may be required when running queries over large number of files or partitions.\n# By 
default hive shell scripts use a heap size of 256 (MB). Larger heap size would also be\n# appropriate for hive server (hwi etc).\n\n\n# Set HADOOP_HOME to point to a specific hadoop install directory\nHADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\n\nexport HIVE_HOME=${HIVE_HOME:-{{hive_home_dir}}}\n\n# Hive Configuration Directory can be controlled by:\nexport HIVE_CONF_DIR=${HIVE_CONF_DIR:-{{hive_config_dir}}}\n\n# Folder containing extra libraries required for hive compilation/execution can be controlled by:\nif [ "${HIVE_AUX_JARS_PATH}" != "" ]; then\n if [ -f "${HIVE_AUX_JARS_PATH}" ]; then\n export HIVE_AUX_JARS_PATH=${HIVE_AUX_JARS_PATH}\n elif [ -d "/usr/hdp/current/hive-webhcat/share/hcatalog" ]; then\n export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar\n fi\nelif [ -d "/usr/hdp/current/hive-webhcat/share/hcatalog" ]; then\n export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar\nfi\n\nexport METASTORE_PORT={{hive_metastore_port}}\n\n{% if sqla_db_used or lib_dir_available %}\nexport LD_LIBRARY_PATH="$LD_LIBRARY_PATH:{{jdbc_libs_dir}}"\nexport JAVA_LIBRARY_PATH="$JAVA_LIBRARY_PATH:{{jdbc_libs_dir}}"\n{% endif %}', u'heap_dump_location': u'/tmp', u'alert_ldap_username': u'', u'hive_log_dir': u'/var/log/hive'}, u'spark2-log4j-properties': {u'content': u'\n# Set everything to be logged to the console\nlog4j.rootCategory=INFO, console\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n\n\n# Settings to quiet third party logs that are too verbose\nlog4j.logger.org.eclipse.jetty=WARN\nlog4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR\nlog4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO\nlog4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO'}, u'ranger-yarn-policymgr-ssl': {}, u'yarn-site': {u'yarn.rm.system-metricspublisher.emit-container-events': u'true', u'yarn.client.nodemanager-connect.max-wait-ms': u'60000', u'yarn.resourcemanager.hostname': u'{host}', u'yarn.node-labels.enabled': u'false', u'yarn.resourcemanager.scheduler.monitor.enable': u'true', u'yarn.nodemanager.aux-services.spark2_shuffle.class': u'org.apache.spark.network.yarn.YarnShuffleService', u'yarn.timeline-service.bind-host': u'0.0.0.0', u'yarn.resourcemanager.ha.enabled': u'false', u'hadoop.registry.dns.bind-port': u'53', u'yarn.webapp.ui2.enable': u'true', u'yarn.nodemanager.runtime.linux.docker.privileged-containers.acl': u'', u'yarn.timeline-service.webapp.address': u'{host}:8188', u'yarn.resourcemanager.state-store.max-completed-applications': u'${yarn.resourcemanager.max-completed-applications}', u'yarn.timeline-service.enabled': u'true', u'yarn.nodemanager.recovery.enabled': u'true', u'yarn.timeline-service.entity-group-fs-store.group-id-plugin-classpath': u'', u'yarn.timeline-service.http-authentication.type': u'simple', u'yarn.nodemanager.container-metrics.unregister-delay-ms': u'60000', u'yarn.resourcemanager.webapp.https.address': u'{host}:8090', u'yarn.timeline-service.entity-group-fs-store.summary-store': u'org.apache.hadoop.yarn.server.timeline.RollingLevelDBTimelineStore', u'yarn.timeline-service.entity-group-fs-store.app-cache-size': u'10', u'yarn.nodemanager.aux-services.spark2_shuffle.classpath': u'{{stack_root}}/${hdp.version}/spark2/aux/*', 
u'yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users': u'true', u'yarn.resourcemanager.am.max-attempts': u'2', u'yarn.nodemanager.log-aggregation.debug-enabled': u'false', u'yarn.scheduler.maximum-allocation-vcores': u'6', u'yarn.resourcemanager.system-metrics-publisher.enabled': u'true', u'yarn.nodemanager.vmem-pmem-ratio': u'2.1', u'yarn.nodemanager.runtime.linux.allowed-runtimes': u'default,docker', u'hadoop.registry.dns.bind-address': u'0.0.0.0', u'yarn.timeline-service.reader.webapp.address': u'{{timeline_reader_address_http}}', u'yarn.nodemanager.resource.memory-mb': u'82944', u'yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size': u'10', u'yarn.resourcemanager.zk-num-retries': u'1000', u'yarn.log.server.url': u'http://{host}:19888/jobhistory/logs', u'yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices': u'', u'yarn.application.classpath': u'$HADOOP_CONF_DIR,{{hadoop_home}}/*,{{hadoop_home}}/lib/*,{{stack_root}}/current/hadoop-hdfs-client/*,{{stack_root}}/current/hadoop-hdfs-client/lib/*,{{stack_root}}/current/hadoop-yarn-client/*,{{stack_root}}/current/hadoop-yarn-client/lib/*', u'yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled': u'true', u'yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled': u'false', u'hadoop.registry.dns.domain-name': u'EXAMPLE.COM', u'yarn.nodemanager.resource.cpu-vcores': u'14', u'yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables': u'', u'yarn.nodemanager.local-dirs': u'/hadoop/yarn/local', u'yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage': u'false', u'yarn.nodemanager.remote-app-log-dir-suffix': u'logs', u'yarn.log.server.web-service.url': u'http://{host}:8188/ws/v1/applicationhistory', u'yarn.resourcemanager.bind-host': u'0.0.0.0', u'yarn.resourcemanager.address': u'{host}:8050', u'yarn.service.framework.path': u'/hdp/apps/${hdp.version}/yarn/service-dep.tar.gz', u'yarn.scheduler.maximum-allocation-mb': u'82944', u'yarn.nodemanager.container-monitor.interval-ms': u'3000', u'yarn.node-labels.fs-store.retry-policy-spec': u'2000, 500', u'yarn.resourcemanager.zk-acl': u'world:anyone:rwcda', u'yarn.timeline-service.leveldb-state-store.path': u'/hadoop/yarn/timeline', u'yarn.scheduler.capacity.ordering-policy.priority-utilization.underutilized-preemption.enabled': u'true', u'yarn.timeline-service.hbase.coprocessor.jar.hdfs.location': u'{{yarn_timeline_jar_location}}', u'yarn.timeline-service.address': u'{host}:10200', u'yarn.log-aggregation-enable': u'true', u'yarn.nodemanager.delete.debug-delay-sec': u'0', u'yarn.timeline-service.store-class': u'org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore', u'yarn.timeline-service.client.retry-interval-ms': u'1000', u'yarn.system-metricspublisher.enabled': u'true', u'yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes': u'org.apache.hadoop.yarn.applications.distributedshell.DistributedShellTimelinePlugin', u'hadoop.registry.zk.quorum': u'{host}:2181,{host}:2181,{host}:2181', u'yarn.nodemanager.aux-services.spark_shuffle.classpath': u'{{stack_root}}/${hdp.version}/spark/aux/*', u'hadoop.http.cross-origin.allowed-origins': u'{{cross_origins}}', u'yarn.nodemanager.aux-services.mapreduce_shuffle.class': u'org.apache.hadoop.mapred.ShuffleHandler', u'hadoop.registry.dns.enabled': u'true', u'yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage': u'90', u'yarn.resourcemanager.zk-timeout-ms': u'10000', u'yarn.resourcemanager.fs.state-store.uri': u' ', 
u'yarn.nodemanager.linux-container-executor.group': u'hadoop', u'yarn.nodemanager.remote-app-log-dir': u'/app-logs', u'yarn.timeline-service.http-cross-origin.enabled': u'true', u'yarn.timeline-service.entity-group-fs-store.cleaner-interval-seconds': u'3600', u'yarn.nodemanager.runtime.linux.docker.privileged-containers.allowed': u'false', u'yarn.resourcemanager.fs.state-store.retry-policy-spec': u'2000, 500', u'yarn.timeline-service.generic-application-history.store-class': u'org.apache.hadoop.yarn.server.applicationhistoryservice.NullApplicationHistoryStore', u'yarn.timeline-service.http-authentication.proxyuser.root.groups': u'*', u'yarn.admin.acl': u'activity_analyzer,yarn', u'hadoop.registry.dns.zone-mask': u'255.255.255.0', u'yarn.nodemanager.disk-health-checker.min-healthy-disks': u'0.25', u'yarn.resourcemanager.work-preserving-recovery.enabled': u'true', u'yarn.resourcemanager.resource-tracker.address': u'{host}:8025', u'yarn.nodemanager.health-checker.script.timeout-ms': u'60000', u'yarn.resourcemanager.scheduler.class': u'org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler', u'yarn.resourcemanager.nodes.exclude-path': u'/etc/hadoop/conf/yarn.exclude', u'yarn.timeline-service.entity-group-fs-store.active-dir': u'/ats/active/', u'yarn.timeline-service.ttl-ms': u'2678400000', u'yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round': u'0.11', u'yarn.nodemanager.resource.percentage-physical-cpu-limit': u'80', u'yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb': u'1000', u'yarn.timeline-service.hbase-schema.prefix': u'prod.', u'yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds': u'3600', u'yarn.timeline-service.state-store-class': u'org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore', u'yarn.nodemanager.log-dirs': u'/hadoop/yarn/log', u'yarn.resourcemanager.display.per-user-apps': u'true', u'yarn.timeline-service.client.max-retries': u'30', u'yarn.nodemanager.health-checker.interval-ms': u'135000', u'yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval': u'15000', u'yarn.nodemanager.admin-env': u'MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX', u'yarn.nodemanager.resource-plugins': u'', u'yarn.nodemanager.vmem-check-enabled': u'false', u'yarn.acl.enable': u'false', u'yarn.timeline-service.leveldb-timeline-store.read-cache-size': u'104857600', u'yarn.nodemanager.log.retain-seconds': u'604800', u'yarn.nodemanager.aux-services': u'mapreduce_shuffle,spark2_shuffle,{{timeline_collector}}', u'yarn.resourcemanager.webapp.address': u'{host}:8088', u'yarn.timeline-service.http-authentication.simple.anonymous.allowed': u'true', u'yarn.timeline-service.versions': u'1.5f,2.0f', u'yarn.resourcemanager.webapp.cross-origin.enabled': u'true', u'yarn.timeline-service.reader.webapp.https.address': u'{{timeline_reader_address_https}}', u'yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size': u'10000', u'yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor': u'1', u'yarn.nodemanager.webapp.cross-origin.enabled': u'true', u'yarn.resourcemanager.connect.max-wait.ms': u'900000', u'yarn.http.policy': u'HTTP_ONLY', u'yarn.nodemanager.runtime.linux.docker.capabilities': u'\n CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,\n SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE', u'yarn.timeline-service.version': u'2.0f', u'yarn.resourcemanager.scheduler.address': u'{host}:8030', 
u'yarn.nodemanager.runtime.linux.docker.allowed-container-networks': u'host,none,bridge', u'yarn.resourcemanager.monitor.capacity.preemption.intra-queue-preemption.enabled': u'true', u'yarn.nodemanager.recovery.dir': u'{{yarn_log_dir_prefix}}/nodemanager/recovery-state', u'yarn.nodemanager.container-executor.class': u'org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor', u'yarn.resourcemanager.store.class': u'org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore', u'yarn.timeline-service.entity-group-fs-store.retain-seconds': u'604800', u'hadoop.registry.dns.zone-subnet': u'172.17.0.0', u'yarn.scheduler.minimum-allocation-vcores': u'1', u'yarn.nodemanager.resource-plugins.gpu.docker-plugin.nvidiadocker-v1.endpoint': u'', u'yarn.timeline-service.leveldb-timeline-store.path': u'/hadoop/yarn/timeline', u'yarn.scheduler.minimum-allocation-mb': u'1024', u'yarn.timeline-service.ttl-enable': u'true', u'yarn.resourcemanager.zk-address': u'{host}:2181,{host}:2181,{host}:2181', u'yarn.nodemanager.runtime.linux.docker.default-container-network': u'host', u'yarn.log-aggregation.retain-seconds': u'2592000', u'yarn.service.system-service.dir': u'/services', u'yarn.nodemanager.address': u'0.0.0.0:45454', u'yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms': u'300000', u'yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms': u'10000', u'yarn.nodemanager.aux-services.timeline_collector.class': u'org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService', u'yarn.resourcemanager.admin.address': u'{host}:8141', u'yarn.nodemanager.log-aggregation.compression-type': u'gz', u'yarn.nodemanager.log-aggregation.num-log-files-per-app': u'30', u'yarn.resourcemanager.recovery.enabled': u'true', u'yarn.timeline-service.recovery.enabled': u'true', u'yarn.nodemanager.bind-host': u'0.0.0.0', u'yarn.resourcemanager.zk-retry-interval-ms': u'1000', u'manage.include.files': u'false', u'yarn.timeline-service.hbase.configuration.file': u'file://{{yarn_hbase_conf_dir}}/hbase-site.xml', u'yarn.nodemanager.recovery.supervised': u'true', u'yarn.resourcemanager.placement-constraints.handler': u'scheduler', u'yarn.timeline-service.http-authentication.proxyuser.root.hosts': u'{ambari-host}', u'yarn.node-labels.fs-store.root-dir': u'/system/yarn/node-labels', u'yarn.timeline-service.entity-group-fs-store.scan-interval-seconds': u'60', u'yarn.timeline-service.entity-group-fs-store.done-dir': u'/ats/done/', u'yarn.nodemanager.aux-services.spark_shuffle.class': u'org.apache.spark.network.yarn.YarnShuffleService', u'yarn.webapp.api-service.enable': u'true', u'yarn.client.nodemanager-connect.retry-interval-ms': u'10000', u'yarn.nodemanager.resource-plugins.gpu.docker-plugin': u'', u'yarn.timeline-service.generic-application-history.save-non-am-container-meta-info': u'false', u'yarn.timeline-service.webapp.https.address': u'{host}:8190', u'yarn.resourcemanager.zk-state-store.parent-path': u'/rmstore', u'yarn.resourcemanager.connect.retry-interval.ms': u'30000', u'yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size': u'10000'}, u'livy2-spark-blacklist': {u'content': u'\n #\n # Configuration override / blacklist. Defines a list of properties that users are not allowed\n # to override when starting Spark sessions.\n #\n # This file takes a list of property names (one per line). 
Empty lines and lines starting with "#"\n # are ignored.\n #\n\n # Disallow overriding the master and the deploy mode.\n spark.master\n spark.submit.deployMode\n\n # Disallow overriding the location of Spark cached jars.\n spark.yarn.jar\n spark.yarn.jars\n spark.yarn.archive\n\n # Don\'t allow users to override the RSC timeout.\n livy.rsc.server.idle_timeout'}, u'ranger-knox-policymgr-ssl': {}, u'yarn-hbase-policy': {u'security.admin.protocol.acl': u'*', u'security.masterregion.protocol.acl': u'*', u'security.client.protocol.acl': u'*'}, u'ranger-hdfs-security': {}, u'livy2-log4j-properties': {u'content': u'\n # Set everything to be logged to the console\n log4j.rootCategory=INFO, console\n log4j.appender.console=org.apache.log4j.ConsoleAppender\n log4j.appender.console.target=System.err\n log4j.appender.console.layout=org.apache.log4j.PatternLayout\n log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n\n\n log4j.logger.org.eclipse.jetty=WARN'}, u'hive-interactive-env': {u'llap_headroom_space': u'12288', u'num_llap_nodes_for_llap_daemons': u'0', u'llap_heap_size': u'0', u'num_llap_nodes': u'0', u'llap_app_name': u'llap0', u'enable_hive_interactive': u'false', u'llap_java_opts': u'-XX:+AlwaysPreTouch {% if java_version > 7 %}-XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:InitiatingHeapOccupancyPercent=70 -XX:+UnlockExperimentalVMOptions -XX:G1MaxNewSizePercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200{% else %}-XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseParallelGC{% endif %}{{heap_dump_opts}}', u'num_retries_for_checking_llap_status': u'20', u'hive_aux_jars': u'', u'hive_heapsize': u'2048', u'content': u'\nexport HADOOP_OPTS="$HADOOP_OPTS -Xloggc:{{hive_log_dir}}/hiveserverinteractive-gc-%t.log -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCCause -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath={{hive_log_dir}}/hsi_heapdump.hprof -Dhive.log.dir={{hive_log_dir}} -Dhive.log.file=hiveserver2Interactive.log"\n\n# The heap size of the jvm stared by hive shell script can be controlled via:\nexport HADOOP_HEAPSIZE={{hive_interactive_heapsize}} # Setting for HiveServer2 and Client\n\nexport HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Xmx${HADOOP_HEAPSIZE}m"\nexport HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS{{heap_dump_opts}}"\n\n# Larger heap size may be required when running queries over large number of files or partitions.\n# By default hive shell scripts use a heap size of 256 (MB). Larger heap size would also be\n# appropriate for hive server (hwi etc).\n\n\n# Set HADOOP_HOME to point to a specific hadoop install directory\nHADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\n\n# Hive Configuration Directory can be controlled by:\nexport HIVE_CONF_DIR={{hive_server_interactive_conf_dir}}\n\n# Add additional hcatalog jars\nif [ "${HIVE_AUX_JARS_PATH}" != "" ]; then\n export HIVE_AUX_JARS_PATH=${HIVE_AUX_JARS_PATH}\nelse\n export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-server2/lib/hive-hcatalog-core.jar\nfi\n\nexport METASTORE_PORT={{hive_metastore_port}}', u'llap_log_level': u'INFO'}, u'ranger-hive-audit': {}, u'zeppelin-env': {u'zeppelin_log_dir': u'/var/log/zeppelin', u'zeppelin_env_content': u'\n # export JAVA_HOME=\n export JAVA_HOME={{java64_home}}\n # export MASTER= # Spark master url. eg. spark://master_addr:7077. 
Leave empty if you want to use local mode.\n export MASTER=yarn-client\n\n # export ZEPPELIN_JAVA_OPTS # Additional jvm options. for example, export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=8g -Dspark.cores.max=16"\n # export ZEPPELIN_MEM # Zeppelin jvm mem options Default -Xms1024m -Xmx1024m -XX:MaxPermSize=512m\n # export ZEPPELIN_INTP_MEM # zeppelin interpreter process jvm mem options. Default -Xms1024m -Xmx1024m -XX:MaxPermSize=512m\n # export ZEPPELIN_INTP_JAVA_OPTS # zeppelin interpreter process jvm options.\n # export ZEPPELIN_SSL_PORT # ssl port (used when ssl environment variable is set to true)\n\n # export ZEPPELIN_LOG_DIR # Where log files are stored. PWD by default.\n export ZEPPELIN_LOG_DIR={{zeppelin_log_dir}}\n # export ZEPPELIN_PID_DIR # The pid files are stored. ${ZEPPELIN_HOME}/run by default.\n export ZEPPELIN_PID_DIR={{zeppelin_pid_dir}}\n # export ZEPPELIN_WAR_TEMPDIR # The location of jetty temporary directory.\n # export ZEPPELIN_NOTEBOOK_DIR # Where notebook saved\n # export ZEPPELIN_NOTEBOOK_HOMESCREEN # Id of notebook to be displayed in homescreen. ex) 2A94M5J1Z\n # export ZEPPELIN_NOTEBOOK_HOMESCREEN_HIDE # hide homescreen notebook from list when this value set to "true". default "false"\n # export ZEPPELIN_NOTEBOOK_S3_BUCKET # Bucket where notebook saved\n # export ZEPPELIN_NOTEBOOK_S3_ENDPOINT # Endpoint of the bucket\n # export ZEPPELIN_NOTEBOOK_S3_USER # User in bucket where notebook saved. For example bucket/user/notebook/2A94M5J1Z/note.json\n # export ZEPPELIN_IDENT_STRING # A string representing this instance of zeppelin. $USER by default.\n # export ZEPPELIN_NICENESS # The scheduling priority for daemons. Defaults to 0.\n # export ZEPPELIN_INTERPRETER_LOCALREPO # Local repository for interpreter\'s additional dependency loading\n # export ZEPPELIN_NOTEBOOK_STORAGE # Refers to pluggable notebook storage class, can have two classes simultaneously with a sync between them (e.g. local and remote).\n # export ZEPPELIN_NOTEBOOK_ONE_WAY_SYNC # If there are multiple notebook storages, should we treat the first one as the only source of truth?\n # export ZEPPELIN_NOTEBOOK_PUBLIC # Make notebook public by default when created, private otherwise\n export ZEPPELIN_INTP_CLASSPATH_OVERRIDES="{{external_dependency_conf}}"\n #### Spark interpreter configuration ####\n\n ## Kerberos ticket refresh setting\n ##\n export KINIT_FAIL_THRESHOLD=5\n export KERBEROS_REFRESH_INTERVAL=1d\n\n ## Use provided spark installation ##\n ## defining SPARK_HOME makes Zeppelin run spark interpreter process using spark-submit\n ##\n # export SPARK_HOME # (required) When it is defined, load it instead of Zeppelin embedded Spark libraries\n #export SPARK_HOME={{spark_home}}\n # export SPARK_SUBMIT_OPTIONS # (optional) extra options to pass to spark submit. 
eg) "--driver-memory 512M --executor-memory 1G".\n # export SPARK_APP_NAME # (optional) The name of spark application.\n\n ## Use embedded spark binaries ##\n ## without SPARK_HOME defined, Zeppelin still able to run spark interpreter process using embedded spark binaries.\n ## however, it is not encouraged when you can define SPARK_HOME\n ##\n # Options read in YARN client mode\n # export HADOOP_CONF_DIR # yarn-site.xml is located in configuration directory in HADOOP_CONF_DIR.\n export HADOOP_CONF_DIR=/etc/hadoop/conf\n # Pyspark (supported with Spark 1.2.1 and above)\n # To configure pyspark, you need to set spark distribution\'s path to \'spark.home\' property in Interpreter setting screen in Zeppelin GUI\n # export PYSPARK_PYTHON # path to the python command. must be the same path on the driver(Zeppelin) and all workers.\n # export PYTHONPATH\n\n ## Spark interpreter options ##\n ##\n # export ZEPPELIN_SPARK_USEHIVECONTEXT # Use HiveContext instead of SQLContext if set true. true by default.\n # export ZEPPELIN_SPARK_CONCURRENTSQL # Execute multiple SQL concurrently if set true. false by default.\n # export ZEPPELIN_SPARK_IMPORTIMPLICIT # Import implicits, UDF collection, and sql if set true. true by default.\n # export ZEPPELIN_SPARK_MAXRESULT # Max number of Spark SQL result to display. 1000 by default.\n # export ZEPPELIN_WEBSOCKET_MAX_TEXT_MESSAGE_SIZE # Size in characters of the maximum text message to be received by websocket. Defaults to 1024000\n\n\n #### HBase interpreter configuration ####\n\n ## To connect to HBase running on a cluster, either HBASE_HOME or HBASE_CONF_DIR must be set\n\n # export HBASE_HOME= # (require) Under which HBase scripts and configuration should be\n # export HBASE_CONF_DIR= # (optional) Alternatively, configuration directory can be set to point to the directory that has hbase-site.xml\n\n # export ZEPPELIN_IMPERSONATE_CMD # Optional, when user want to run interpreter as end web user. eg) \'sudo -H -u ${ZEPPELIN_IMPERSONATE_USER} bash -c \'', u'zeppelin_user': u'zeppelin', u'zeppelin_group': u'zeppelin', u'zeppelin_pid_dir': u'/var/run/zeppelin'}, u'ams-ssl-client': {u'ssl.client.truststore.password': u'bigdata', u'ssl.client.truststore.type': u'jks', u'ssl.client.truststore.location': u'/etc/security/clientKeys/all.jks'}, u'livy2-conf': {u'livy.server.csrf_protection.enabled': u'true', u'livy.impersonation.enabled': u'true', u'livy.server.port': u'8999', u'livy.server.recovery.state-store': u'filesystem', u'livy.server.recovery.state-store.url': u'/livy2-recovery', u'livy.repl.enableHiveContext': u'true', u'livy.server.session.timeout': u'3600000', u'livy.spark.master': u'yarn', u'livy.superusers': u'zeppelin-{clustername}', u'livy.environment': u'production', u'livy.server.recovery.mode': u'recovery'}, u'hst-agent-conf': {u'bundle.logs_to_capture': u'(.*).log$,(.*).out$,(.*).err$', u'upload.retry_interval': u'15', u'upload.retry_count': u'100', u'server.connection_retry_interval': u'10', u'agent.tmp_dir': u'/var/lib/smartsense/hst-agent/data/tmp', u'agent.loglevel': u'INFO', u'security.anonymization.max.heap': u'2048', u'agent.version': u'1.5.0.2.7.0.0-897', u'server.connection_retry_count': u'100'}, u'ams-hbase-env': {u'hbase_pid_dir': u'/var/run/ambari-metrics-collector/', u'regionserver_xmn_size': u'128', u'max_open_files_limit': u'32768', u'hbase_master_maxperm_size': u'128', u'hbase_regionserver_xmn_ratio': u'0.2', u'hbase_master_heapsize': u'1024', u'content': u'\n# Set environment variables here.\n\n# The java implementation to use. 
Java 1.6+ required.\nexport JAVA_HOME={{java64_home}}\n\n# HBase Configuration directory\nexport HBASE_CONF_DIR=${HBASE_CONF_DIR:-{{hbase_conf_dir}}}\n\n# Extra Java CLASSPATH elements. Optional.\nadditional_cp={{hbase_classpath_additional}}\nif [ -n "$additional_cp" ];\nthen\n export HBASE_CLASSPATH=${HBASE_CLASSPATH}:$additional_cp\nelse\n export HBASE_CLASSPATH=${HBASE_CLASSPATH}\nfi\n\n# The maximum amount of heap to use for hbase shell.\nexport HBASE_SHELL_OPTS="-Xmx256m"\n\n# Extra Java runtime options.\n# Below are what we set by default. May only work with SUN JVM.\n# For more on why as well as other possible settings,\n# see http://wiki.apache.org/hadoop/PerformanceTuning\nexport HBASE_OPTS="-XX:+UseConcMarkSweepGC -XX:ErrorFile={{hbase_log_dir}}/hs_err_pid%p.log -Djava.io.tmpdir={{hbase_tmp_dir}}"\nexport SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{hbase_log_dir}}/gc.log-`date +\'%Y%m%d%H%M\'`"\n# Uncomment below to enable java garbage collection logging.\n# export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"\n\n# Uncomment and adjust to enable JMX exporting\n# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.\n# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html\n#\n# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"\n\n{% if java_version < 8 %}\nexport HBASE_MASTER_OPTS=" -XX:PermSize=64m -XX:MaxPermSize={{hbase_master_maxperm_size}} -Xms{{hbase_heapsize}} -Xmx{{hbase_heapsize}} -Xmn{{hbase_master_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"\nexport HBASE_REGIONSERVER_OPTS="-XX:MaxPermSize=128m -Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}}"\n{% else %}\nexport HBASE_MASTER_OPTS=" -Xms{{hbase_heapsize}} -Xmx{{hbase_heapsize}} -Xmn{{hbase_master_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"\nexport HBASE_REGIONSERVER_OPTS=" -Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}}"\n{% endif %}\n\n\n# export HBASE_THRIFT_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"\n# export HBASE_ZOOKEEPER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"\n\n# File naming hosts on which HRegionServers will run. $HBASE_HOME/conf/regionservers by default.\nexport HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers\n\n# Extra ssh options. Empty by default.\n# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"\n\n# Where log files are stored. $HBASE_HOME/logs by default.\nexport HBASE_LOG_DIR={{hbase_log_dir}}\n\n# A string representing this instance of hbase. $USER by default.\n# export HBASE_IDENT_STRING=$USER\n\n# The scheduling priority for daemon processes. See \'man nice\'.\n# export HBASE_NICENESS=10\n\n# The directory where pid files are stored. /tmp by default.\nexport HBASE_PID_DIR={{hbase_pid_dir}}\n\n# Seconds to sleep between slave commands. Unset by default. 
This\n# can be useful in large clusters, where, e.g., slave rsyncs can\n# otherwise arrive faster than the master can service them.\n# export HBASE_SLAVE_SLEEP=0.1\n\n# Tell HBase whether it should manage it\'s own instance of Zookeeper or not.\nexport HBASE_MANAGES_ZK=false\n\n{% if security_enabled %}\nexport HBASE_OPTS="$HBASE_OPTS -Djava.security.auth.login.config={{client_jaas_config_file}}"\nexport HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Djava.security.auth.login.config={{master_jaas_config_file}} -Djavax.security.auth.useSubjectCredsOnly=false"\nexport HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Djava.security.auth.login.config={{regionserver_jaas_config_file}} -Djavax.security.auth.useSubjectCredsOnly=false"\nexport HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Djava.security.auth.login.config={{ams_zookeeper_jaas_config_file}}"\n{% endif %}\n\n# use embedded native libs\n_HADOOP_NATIVE_LIB="/usr/lib/ams-hbase/lib/hadoop-native/"\nexport HBASE_OPTS="$HBASE_OPTS -Djava.library.path=${_HADOOP_NATIVE_LIB}"\n\n# Unset HADOOP_HOME to avoid importing HADOOP installed cluster related configs like: /usr/hdp/2.2.0.0-2041/hadoop/conf/\nexport HADOOP_HOME={{ams_hbase_home_dir}}\n\n# Explicitly Setting HBASE_HOME for AMS HBase so that there is no conflict\nexport HBASE_HOME={{ams_hbase_home_dir}}', u'hbase_classpath_additional': u'', u'hbase_regionserver_heapsize': u'512', u'hbase_log_dir': u'/var/log/ambari-metrics-collector', u'hbase_regionserver_shutdown_timeout': u'30', u'hbase_master_xmn_size': u'256'}, u'hive-atlas-application.properties': {u'atlas.hook.hive.synchronous': u'false', u'atlas.hook.hive.queueSize': u'1000', u'atlas.hook.hive.minThreads': u'5', u'atlas.hook.hive.numRetries': u'3', u'atlas.hook.hive.maxThreads': u'5', u'atlas.hook.hive.keepAliveTime': u'10'}, u'zeppelin-shiro-ini': {u'shiro_ini_content': u'\n[users]\n# List of users with their password allowed to access Zeppelin.\n# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections\n#admin = $shiro1$SHA-256$500000$p6Be9+t2hdUXJQj2D0b1fg==$bea5JIMqcVF3J6eNZGWQ/3eeDByn5iEZDuGsEip06+M=, admin\n#user1 = $shiro1$SHA-256$500000$G2ymy/qmuZnGY6or4v2KfA==$v9fabqWgCNCgechtOUqAQenGDs0OSLP28q2wolPT4wU=, role1, role2\n#user2 = $shiro1$SHA-256$500000$aHBgiuwSgAcP3Xt5mEzeFw==$KosBnN2BNKA9/KHBL0hnU/woJFl+xzJFj12NQ0fnjCU=, role3\n#user3 = $shiro1$SHA-256$500000$nf0GzH10GbYVoxa7DOlOSw==$ov/IA5W8mRWPwvAoBjNYxg3udJK0EmrVMvFCwcr9eAs=, role2\n\n# Sample LDAP configuration, for user Authentication, currently tested for single Realm\n[main]\n### A sample for configuring Active Directory Realm\n#activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealm\n#activeDirectoryRealm.systemUsername = userNameA\n\n#use either systemPassword or hadoopSecurityCredentialPath, more details in http://zeppelin.apache.org/docs/latest/security/shiroauthentication.html\n#activeDirectoryRealm.syst... 
= passwordA\n#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/zeppelin.jceks\n#activeDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COM\n#activeDirectoryRealm.url = ldap://ldap.test.com:389\n#activeDirectoryRealm.groupRolesMap = "CN=admin,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"admin","CN=finance,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"finance","CN=hr,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"hr"\n#activeDirectoryRealm.authorizationCachingEnabled = false\n\n### A sample for configuring LDAP Directory Realm\nldapRealm = org.apache.zeppelin.realm.LdapGroupRealm\n## search base for ldap groups (only relevant for LdapGroupRealm):\nldapRealm.contextFactory.environment[ldap.searchBase] = dc=platform,dc={clustername},dc=de\nldapRealm.contextFactory.url = ldap://{ambari-host}:389\nldapRealm.userDnTemplate = uid={0},ou=people,dc=platform,dc={clustername},dc=de\nldapRealm.contextFactory.authenticationMechanism = simple\n\n### A sample PAM configuration\n#pamRealm=org.apache.zeppelin.realm.PamRealm\n#pamRealm.service=sshd\n\n## To be commented out when not using [user] block / paintext\n#passwordMatcher = org.apache.shiro.authc.credential.PasswordMatcher\n#iniRealm.credentialsMatcher = $passwordMatcher\n\nsessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager\n### If caching of user is required then uncomment below lines\ncacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager\nsecurityManager.cacheManager = $cacheManager\n\ncookie = org.apache.shiro.web.servlet.SimpleCookie\ncookie.name = JSESSIONID\n#Uncomment the line below when running Zeppelin-Server in HTTPS mode\n#cookie.secure = true\ncookie.httpOnly = true\nsessionManager.sessionIdCookie = $cookie\n\nsecurityManager.sessionManager = $sessionManager\n# 86,400,000 milliseconds = 24 hour\nsecurityManager.sessionManager.globalSessionTimeout = 86400000\nshiro.loginUrl = /api/login\n\n[roles]\nrole1 = *\nrole2 = *\nrole3 = *\nadmin = *\n\n[urls]\n# This section is used for url-based security.\n# You can secure interpreter, configuration and credential information by urls. 
Comment or uncomment the below urls that you want to hide.\n# anon means the access is anonymous.\n# authc means Form based Auth Security\n# To enfore security, comment the line below and uncomment the next one\n/api/version = anon\n#/api/interpreter/** = authc, roles[admin]\n#/api/configurations/** = authc, roles[admin]\n#/api/credential/** = authc, roles[admin]\n#/** = anon\n/** = authc'}, u'ams-grafana-ini': {u'cert_key': u'/etc/ambari-metrics-grafana/conf/ams-grafana.key', u'protocol': u'http', u'ca_cert': u'', u'cert_file': u'/etc/ambari-metrics-grafana/conf/ams-grafana.crt', u'content': u'\n##################### Grafana Configuration Example #####################\n#\n# Everything has defaults so you only need to uncomment things you want to\n# change\n\n# possible values : production, development\n; app_mode = production\n\n#################################### Paths ####################################\n[paths]\n# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)\n#\n;data = /var/lib/grafana\ndata = {{ams_grafana_data_dir}}\n#\n# Directory where grafana can store logs\n#\n;logs = /var/log/grafana\nlogs = {{ams_grafana_log_dir}}\n\n\n#################################### Server ####################################\n[server]\n# Protocol (http or https)\n;protocol = http\nprotocol = {{ams_grafana_protocol}}\n# The ip address to bind to, empty will bind to all interfaces\n;http_addr =\n\n# The http port to use\n;http_port = 3000\nhttp_port = {{ams_grafana_port}}\n\n# The public facing domain name used to access grafana from a browser\n;domain = localhost\n\n# Redirect to correct domain if host header does not match domain\n# Prevents DNS rebinding attacks\n;enforce_domain = false\n\n# The full public facing url\n;root_url = %(protocol)s://%(domain)s:%(http_port)s/\n\n# Log web requests\n;router_logging = false\n\n# the path relative working path\n;static_root_path = public\nstatic_root_path = /usr/lib/ambari-metrics-grafana/public\n\n# enable gzip\n;enable_gzip = false\n\n# https certs & key file\n;cert_file =\n;cert_key =\ncert_file = {{ams_grafana_cert_file}}\ncert_key = {{ams_grafana_cert_key}}\n\n#################################### Database ####################################\n[database]\n# Either "mysql", "postgres" or "sqlite3", it\'s your choice\n;type = sqlite3\n;host = 127.0.0.1:3306\n;name = grafana\n;user = root\n;password =\n\n# For "postgres" only, either "disable", "require" or "verify-full"\n;ssl_mode = disable\n\n# For "sqlite3" only, path relative to data_path setting\n;path = grafana.db\n\n#################################### Session ####################################\n[session]\n# Either "memory", "file", "redis", "mysql", "postgres", default is "file"\n;provider = file\n\n# Provider config options\n# memory: not have any config yet\n# file: session dir path, is relative to grafana data_path\n# redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=grafana`\n# mysql: go-sql-driver/mysql dsn config string, e.g. 
`user:password@tcp(127.0.0.1:3306)/database_name`\n# postgres: user=a password=b host=localhost port=5432 dbname=c sslmode=disable\n;provider_config = sessions\n\n# Session cookie name\n;cookie_name = grafana_sess\n\n# If you use session in https only, default is false\n;cookie_secure = false\n\n# Session life time, default is 86400\n;session_life_time = 86400\n\n#################################### Analytics ####################################\n[analytics]\n# Server reporting, sends usage counters to stats.grafana.org every 24 hours.\n# No ip addresses are being tracked, only simple counters to track\n# running instances, dashboard and error counts. It is very helpful to us.\n# Change this option to false to disable reporting.\n;reporting_enabled = true\n\n# Google Analytics universal tracking code, only enabled if you specify an id here\n;google_analytics_ua_id =\n\n#################################### Security ####################################\n[security]\n# default admin user, created on startup\nadmin_user = {{ams_grafana_admin_user}}\n\n# default admin password, can be changed before first start of grafana, or in profile settings\n;admin_password =\n\n# used for signing\n;secret_key = SW2YcwTIb9zpOOhoPsMm\n\n# Auto-login remember days\n;login_remember_days = 7\n;cookie_username = grafana_user\n;cookie_remember_name = grafana_remember\n\n# disable gravatar profile images\n;disable_gravatar = false\n\n# data source proxy whitelist (ip_or_domain:port seperated by spaces)\n;data_source_proxy_whitelist =\n\n#################################### Users ####################################\n[users]\n# disable user signup / registration\n;allow_sign_up = true\n\n# Allow non admin users to create organizations\n;allow_org_create = true\n\n# Set to true to automatically assign new users to the default organization (id 1)\n;auto_assign_org = true\n\n# Default role new users will be automatically assigned (if disabled above is set to true)\n;auto_assign_org_role = Viewer\n\n# Background text for the user field on the login page\n;login_hint = email or username\n\n#################################### Anonymous Auth ##########################\n[auth.anonymous]\n# enable anonymous access\nenabled = true\n\n# specify organization name that should be used for unauthenticated users\norg_name = Main Org.\n\n# specify role for unauthenticated users\n;org_role = Admin\n\n#################################### Github Auth ##########################\n[auth.github]\n;enabled = false\n;allow_sign_up = false\n;client_id = some_id\n;client_secret = some_secret\n;scopes = user:email,read:org\n;auth_url = https://github.com/login/oauth/authorize\n;token_url = https://github.com/login/oauth/access_token\n;api_url = https://api.github.com/user\n;team_ids =\n;allowed_organizations =\n\n#################################### Google Auth ##########################\n[auth.google]\n;enabled = false\n;allow_sign_up = false\n;client_id = some_client_id\n;client_secret = some_client_secret\n;scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email\n;auth_url = https://accounts.google.com/o/oauth2/auth\n;token_url = https://accounts.google.com/o/oauth2/token\n;api_url = https://www.googleapis.com/oauth2/v1/userinfo\n;allowed_domains =\n\n#################################### Auth Proxy ##########################\n[auth.proxy]\n;enabled = false\n;header_name = X-WEBAUTH-USER\n;header_property = username\n;auto_sign_up = true\n\n#################################### Basic Auth 
##########################\n[auth.basic]\n;enabled = true\n\n#################################### Auth LDAP ##########################\n[auth.ldap]\n;enabled = false\n;config_file = /etc/grafana/ldap.toml\n\n#################################### SMTP / Emailing ##########################\n[smtp]\n;enabled = false\n;host = localhost:25\n;user =\n;password =\n;cert_file =\n;key_file =\n;skip_verify = false\n;from_address = admin@grafana.localhost\n\n[emails]\n;welcome_email_on_sign_up = false\n\n#################################### Logging ##########################\n[log]\n# Either "console", "file", default is "console"\n# Use comma to separate multiple modes, e.g. "console, file"\n;mode = console, file\n\n# Buffer length of channel, keep it as it is if you don\'t know what it is.\n;buffer_len = 10000\n\n# Either "Trace", "Debug", "Info", "Warn", "Error", "Critical", default is "Trace"\n;level = Info\n\n# For "console" mode only\n[log.console]\n;level =\n\n# For "file" mode only\n[log.file]\n;level =\n# This enables automated log rotate(switch of following options), default is true\n;log_rotate = true\n\n# Max line number of single file, default is 1000000\n;max_lines = 1000000\n\n# Max size shift of single file, default is 28 means 1 << 28, 256MB\n;max_lines_shift = 28\n\n# Segment log daily, default is true\n;daily_rotate = true\n\n# Expired days of log file(delete after max days), default is 7\n;max_days = 7\n\n#################################### AMPQ Event Publisher ##########################\n[event_publisher]\n;enabled = false\n;rabbitmq_url = amqp://localhost/\n;exchange = grafana_events\n\n;#################################### Dashboard JSON files ##########################\n[dashboards.json]\n;enabled = false\n;path = /var/lib/grafana/dashboards\npath = /usr/lib/ambari-metrics-grafana/public/dashboards', u'port': u'3000'}, u'livy2-env': {u'content': u'\n #!/usr/bin/env bash\n\n # - SPARK_HOME Spark which you would like to use in livy\n # - SPARK_CONF_DIR Directory containing the Spark configuration to use.\n # - HADOOP_CONF_DIR Directory containing the Hadoop / YARN configuration to use.\n # - LIVY_LOG_DIR Where log files are stored. (Default: ${LIVY_HOME}/logs)\n # - LIVY_PID_DIR Where the pid file is stored. 
(Default: /tmp)\n # - LIVY_SERVER_JAVA_OPTS Java Opts for running livy server (You can set jvm related setting here, like jvm memory/gc algorithm and etc.)\n export SPARK_HOME=/usr/hdp/current/spark2-client\n export SPARK_CONF_DIR=/etc/spark2/conf\n export JAVA_HOME={{java_home}}\n export HADOOP_CONF_DIR=/etc/hadoop/conf\n export LIVY_LOG_DIR={{livy2_log_dir}}\n export LIVY_PID_DIR={{livy2_pid_dir}}\n export LIVY_SERVER_JAVA_OPTS="-Xmx2g"', u'livy2_log_dir': u'/var/log/livy2', u'livy2_group': u'livy', u'spark_home': u'/usr/hdp/current/spark2-client', u'livy2_user': u'livy', u'livy2_pid_dir': u'/var/run/livy2'}, u'hive-site': {u'hive.tez.input.generate.consistent.splits': u'true', u'javax.jdo.option.ConnectionDriverName': u'com.mysql.jdbc.Driver', u'hive.fetch.task.aggr': u'false', u'hive.tez.java.opts': u'-server -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps', u'hive.server2.table.type.mapping': u'CLASSIC', u'hive.tez.min.partition.factor': u'0.25', u'hive.tez.cpu.vcores': u'-1', u'hive.conf.restricted.list': u'hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role', u'hive.stats.dbclass': u'fs', u'hive.execution.mode': u'container', u'hive.tez.auto.reducer.parallelism': u'true', u'hive.fetch.task.conversion': u'more', u'hive.server2.thrift.http.path': u'cliservice', u'hive.exec.scratchdir': u'/tmp/hive', u'hive.exec.post.hooks': u'org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook', u'hive.zookeeper.namespace': u'hive_zookeeper_namespace', u'hive.cbo.enable': u'true', u'hive.optimize.index.filter': u'true', u'hive.optimize.bucketmapjoin': u'true', u'hive.mapjoin.bucket.cache.size': u'10000', u'hive.limit.optimize.enable': u'true', u'hive.fetch.task.conversion.threshold': u'1073741824', u'hive.exec.max.dynamic.partitions': u'5000', u'hive.server2.webui.use.ssl': u'false', u'hive.metastore.sasl.enabled': u'false', u'hive.txn.manager': u'org.apache.hadoop.hive.ql.lockmgr.DbTxnManager', u'hive.optimize.constant.propagation': u'true', u'hive.vectorized.execution.mapjoin.minmax.enabled': u'true', u'hive.exec.submitviachild': u'false', u'hive.metastore.kerberos.principal': u'hive/_HOST@EXAMPLE.COM', u'hive.txn.max.open.batch': u'1000', u'hive.exec.compress.output': u'false', u'hive.merge.size.per.task': u'256000000', u'hive.metastore.uris': u'thrift://{host}:9083', u'hive.heapsize': u'1024', u'hive.security.authenticator.manager': u'org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator', u'hive.merge.mapfiles': u'true', u'hive.compactor.initiator.on': u'true', u'hive.txn.strict.locking.mode': u'false', u'hive.mapjoin.optimized.hashtable': u'true', u'hive.default.fileformat': u'TextFile', u'hive.optimize.metadataonly': u'true', u'hive.tez.dynamic.partition.pruning.max.event.size': u'1048576', u'hive.server2.thrift.max.worker.threads': u'500', u'hive.optimize.sort.dynamic.partition': u'false', u'hive.server2.enable.doAs': u'true', u'hive.metastore.pre.event.listeners': u'org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener', u'hive.metastore.failure.retries': u'24', u'hive.merge.smallfiles.avgsize': u'16000000', u'hive.tez.max.partition.factor': u'2.0', u'hive.server2.transport.mode': u'binary', u'hive.tez.container.size': u'27648', u'hive.optimize.bucketmapjoin.sortedmerge': u'false', u'hive.lock.manager': u'', u'hive.compactor.worker.threads': u'1', u'hive.security.metastore.authorization.manager': 
u'org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider', u'hive.map.aggr.hash.percentmemory': u'0.5', u'hive.user.install.directory': u'/user/', u'datanucleus.autoCreateSchema': u'false', u'hive.compute.query.using.stats': u'true', u'hive.merge.rcfile.block.level': u'true', u'hive.map.aggr': u'true', u'hive.repl.rootdir': u'', u'hive.metastore.client.connect.retry.delay': u'5s', u'hive.server2.idle.operation.timeout': u'6h', u'hive.security.authorization.enabled': u'false', u'atlas.hook.hive.minThreads': u'1', u'hive.server2.tez.default.queues': u'default', u'hive.prewarm.enabled': u'false', u'hive.exec.reducers.max': u'1009', u'hive.metastore.kerberos.keytab.file': u'/etc/security/keytabs/hive.service.keytab', u'hive.stats.fetch.partition.stats': u'true', u'hive.cli.print.header': u'false', u'hive.server2.thrift.sasl.qop': u'auth', u'hive.server2.support.dynamic.service.discovery': u'true', u'hive.server2.thrift.port': u'10000', u'hive.exec.reducers.bytes.per.reducer': u'67108864', u'hive.driver.parallel.compilation': u'true', u'hive.compactor.abortedtxn.threshold': u'1000', u'hive.tez.dynamic.partition.pruning.max.data.size': u'104857600', u'hive.metastore.warehouse.dir': u'/warehouse/tablespace/managed/hive', u'hive.tez.cartesian-product.enabled': u'true', u'hive.metastore.client.socket.timeout': u'1800s', u'hive.server2.zookeeper.namespace': u'hiveserver2', u'hive.prewarm.numcontainers': u'3', u'hive.cluster.delegation.token.store.class': u'org.apache.hadoop.hive.thrift.ZooKeeperTokenStore', u'hive.security.metastore.authenticator.manager': u'org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator', u'atlas.hook.hive.maxThreads': u'1', u'hive.auto.convert.join': u'true', u'hive.server2.authentication.spnego.keytab': u'HTTP/_HOST@EXAMPLE.COM', u'hive.mapred.reduce.tasks.speculative.execution': u'false', u'hive.optimize.dynamic.partition.hashjoin': u'true', u'hive.load.data.owner': u'hive', u'javax.jdo.option.ConnectionURL': u'jdbc:mysql://{host}/hive?createDatabaseIfNotExist=true', u'hive.tez.exec.print.summary': u'true', u'hive.exec.dynamic.partition.mode': u'nonstrict', u'hive.auto.convert.sortmerge.join': u'true', u'hive.zookeeper.quorum': u'{host}:2181,{host}:2181,{host}:2181', u'hive.cluster.delegation.token.store.zookeeper.znode': u'/hive/cluster/delegation', u'hive.tez.smb.number.waves': u'0.5', u'hive.exec.parallel': u'false', u'hive.exec.compress.intermediate': u'false', u'hive.server2.webui.cors.allowed.headers': u'X-Requested-With,Content-Type,Accept,Origin,X-Requested-By,x-requested-by', u'hive.txn.timeout': u'300', u'hive.metastore.authorization.storage.checks': u'false', u'hive.metastore.cache.pinobjtypes': u'Table,Database,Type,FieldSchema,Order', u'hive.server2.logging.operation.enabled': u'true', u'hive.merge.tezfiles': u'false', u'hive.exec.parallel.thread.number': u'8', u'hive.auto.convert.join.noconditionaltask': u'true', u'hive.server2.authentication.kerberos.principal': u'hive/_HOST@EXAMPLE.COM', u'hive.compactor.worker.timeout': u'86400', u'hive.repl.cm.enabled': u'', u'hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled': u'true', u'hive.optimize.null.scan': u'true', u'hive.server2.tez.initialize.default.sessions': u'false', u'datanucleus.cache.level2.type': u'none', u'hive.metastore.event.listeners': u'', u'hive.stats.autogather': u'true', u'hive.server2.use.SSL': u'false', u'hive.exec.submit.local.task.via.child': u'true', u'hive.merge.mapredfiles': u'false', u'hive.vectorized.execution.enabled': u'true', 
u'hive.tez.bucket.pruning': u'true', u'hive.cluster.delegation.token.store.zookeeper.connectString': u'{host}:2181,{host}:2181,{host}:2181', u'hive.vectorized.execution.mapjoin.native.enabled': u'true', u'hive.auto.convert.sortmerge.join.to.mapjoin': u'true', u'hive.optimize.reducededuplication': u'true', u'hive.server2.tez.sessions.per.default.queue': u'1', u'hive.exec.max.dynamic.partitions.pernode': u'2000', u'hive.tez.dynamic.partition.pruning': u'true', u'datanucleus.fixedDatastore': u'true', u'hive.server2.webui.port': u'10002', u'hive.hook.proto.base-directory': u'{hive_metastore_warehouse_external_dir}/sys.db/query_data/', u'hive.create.as.insert.only': u'true', u'hive.limit.pushdown.memory.usage': u'0.04', u'hive.security.metastore.authorization.auth.reads': u'true', u'ambari.hive.db.schema.name': u'hive', u'hive.vectorized.groupby.checkinterval': u'4096', u'hive.smbjoin.cache.rows': u'10000', u'hive.metastore.execute.setugi': u'true', u'hive.zookeeper.client.port': u'2181', u'hive.vectorized.groupby.maxentries': u'100000', u'hive.server2.authentication.spnego.principal': u'/etc/security/keytabs/spnego.service.keytab', u'hive.server2.authentication.kerberos.keytab': u'/etc/security/keytabs/hive.service.keytab', u'javax.jdo.option.ConnectionPassword': u'F1Yx6JGOyEE6annINh2k', u'hive.exec.max.created.files': u'100000', u'hive.default.fileformat.managed': u'ORC', u'hive.map.aggr.hash.min.reduction': u'0.5', u'hive.server2.max.start.attempts': u'5', u'hive.server2.thrift.http.port': u'10001', u'hive.metastore.transactional.event.listeners': u'org.apache.hive.hcatalog.listener.DbNotificationListener', u'hive.orc.splits.include.file.footer': u'false', u'hive.repl.cmrootdir': u'', u'hive.exec.pre.hooks': u'org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook', u'hive.security.authorization.manager': u'org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory', u'hive.merge.orcfile.stripe.level': u'true', u'hive.exec.failure.hooks': u'org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook', u'hive.server2.allow.user.substitution': u'true', u'hive.metastore.connect.retries': u'24', u'hive.metastore.server.max.threads': u'100000', u'hive.vectorized.groupby.flush.percent': u'0.1', u'hive.vectorized.execution.reduce.enabled': u'true', u'hive.enforce.sortmergebucketmapjoin': u'true', u'hive.auto.convert.join.noconditionaltask.size': u'7730941132', u'javax.jdo.option.ConnectionUserName': u'hive', u'hive.server2.webui.enable.cors': u'true', u'hive.tez.log.level': u'INFO', u'hive.compactor.delta.num.threshold': u'10', u'hive.exec.dynamic.partition': u'true', u'hive.server2.authentication': u'NONE', u'hive.stats.fetch.column.stats': u'true', u'hive.orc.compute.splits.num.threads': u'10', u'hive.strict.managed.tables': u'true', u'hive.mapjoin.hybridgrace.hashtable': u'false', u'metastore.create.as.acid': u'true', u'hive.convert.join.bucket.mapjoin.tez': u'false', u'hive.optimize.reducededuplication.min.reducer': u'4', u'hive.metastore.warehouse.external.dir': u'/warehouse/tablespace/external/hive', u'hive.server2.logging.operation.log.location': u'/tmp/hive/operation_logs', u'hive.metastore.dml.events': u'true', u'hive.tez.input.format': u'org.apache.hadoop.hive.ql.io.HiveInputFormat', u'hive.exec.orc.split.strategy': u'HYBRID', u'hive.support.concurrency': u'true', u'hive.server2.idle.session.timeout': u'1d', u'hive.metastore.db.type': u'{{hive_metastore_db_type}}', u'hive.materializedview.rewriting.incremental': u'false', u'hive.compactor.check.interval': 
u'300', u'hive.compactor.delta.pct.threshold': u'0.1f', u'hive.map.aggr.hash.force.flush.memory.threshold': u'0.9', u'hive.service.metrics.codahale.reporter.classes': u'org.apache.hadoop.hive.common.metrics.metrics2.JsonFileMetricsReporter,org.apache.hadoop.hive.common.metrics.metrics2.JmxMetricsReporter,org.apache.hadoop.hive.common.metrics.metrics2.Metrics2Reporter'}, u'tez-env': {u'content': u'\n# Tez specific configuration\nexport TEZ_CONF_DIR={{config_dir}}\n\n# Set HADOOP_HOME to point to a specific hadoop install directory\nexport HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}', u'enable_heap_dump': u'false', u'tez_user': u'tez', u'heap_dump_location': u'/tmp'}, u'hive-interactive-site': {u'hive.tez.input.generate.consistent.splits': u'true', u'hive.llap.client.consistent.splits': u'true', u'hive.llap.io.allocator.mmap': u'false', u'hive.server2.thrift.http.port': u'10501', u'hive.druid.storage.storageDirectory': u'{{druid_storage_dir}}', u'hive.metastore.event.listeners': u'', u'hive.llap.enable.grace.join.in.llap': u'false', u'hive.llap.io.enabled': u'true', u'hive.llap.daemon.yarn.container.mb': u'1024', u'hive.limit.optimize.enable': u'false', u'hive.server2.webui.use.ssl': u'false', u'hive.server2.tez.sessions.custom.queue.allowed': u'ignore', u'hive.vectorized.execution.mapjoin.minmax.enabled': u'true', u'hive.load.data.owner': u'hive', u'hive.druid.maxTries': u'5', u'hive.llap.daemon.queue.name': u'llap', u'hive.druid.indexer.partition.size.max': u'1000000', u'hive.txn.strict.locking.mode': u'false', u'hive.llap.task.scheduler.locality.delay': u'8000', u'llap.shuffle.connection-keep-alive.timeout': u'60', u'hive.server2.enable.doAs': u'false', u'hive.merge.nway.joins': u'false', u'dfs.client.mmap.enabled': u'false', u'hive.druid.http.read.timeout': u'PT10M', u'hive.llap.daemon.task.scheduler.enable.preemption': u'true', u'hive.tez.container.size': u'4096', u'hive.tez.bucket.pruning': u'true', u'hive.druid.indexer.memory.rownum.max': u'75000', u'hive.lock.manager': u'', u'dfs.short.circuit.shared.memory.watcher.interrupt.check.ms': u'0', u'hive.llap.mapjoin.memory.oversubscribe.factor': u'0.3f', u'hive.llap.daemon.vcpus.per.instance': u'{hive_llap_daemon_num_executors}', u'hive.server2.idle.operation.timeout': u'6h', u'hive.server2.tez.default.queues': u'llap', u'hive.prewarm.enabled': u'false', u'hive.druid.working.directory': u'/tmp/druid-indexing', u'hive.server2.thrift.port': u'10500', u'hive.druid.metadata.username': u'druid', u'hive.driver.parallel.compilation': u'true', u'hive.llap.daemon.num.executors': u'0', u'hive.tez.cartesian-product.enabled': u'true', u'hive.server2.zookeeper.namespace': u'hiveserver2-interactive', u'hive.druid.coordinator.address.default': u'localhost:8082', u'hive.strict.managed.tables': u'true', u'hive.tez.exec.print.summary': u'true', u'hive.llap.daemon.yarn.shuffle.port': u'15551', u'hive.server2.webui.cors.allowed.headers': u'X-Requested-With,Content-Type,Accept,Origin,X-Requested-By,x-requested-by', u'hive.llap.daemon.am.liveness.heartbeat.interval.ms': u'10000ms', u'hive.llap.object.cache.enabled': u'true', u'llap.shuffle.connection-keep-alive.enable': u'true', u'hive.druid.broker.address.default': u'localhost:8082', u'hive.llap.daemon.service.hosts': u'@llap0', u'hive.druid.metadata.password': u'{{druid_metadata_password}}', u'hive.druid.metadata.uri': u'jdbc:mysql://localhost:3355/druid', u'hive.optimize.dynamic.partition.hashjoin': u'true', 
u'hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled': u'true', u'hive.server2.tez.initialize.default.sessions': u'true', u'hive.druid.overlord.address.default': u'localhost:8090', u'hive.druid.passiveWaitTimeMs': u'30000', u'hive.llap.io.memory.size': u'0', u'hive.vectorized.execution.mapjoin.native.enabled': u'true', u'hive.vectorized.execution.reduce.enabled': u'true', u'hive.druid.basePersistDirectory': u'', u'hive.server2.tez.sessions.per.default.queue': u'1', u'hive.llap.io.allocator.mmap.path': u'', u'hive.server2.webui.port': u'10502', u'hive.vectorized.groupby.maxentries': u'1000000', u'hive.execution.mode': u'llap', u'hive.llap.daemon.rpc.port': u'0', u'hive.map.aggr.hash.min.reduction': u'0.99', u'hive.llap.zk.sm.connectionString': u'{host}:2181,{host}:2181,{host}:2181', u'hive.server2.tez.sessions.restricted.configs': u'hive.execution.mode,hive.execution.engine', u'hive.druid.metadata.db.type': u'mysql', u'hive.druid.select.distribute': u'true', u'hive.llap.io.threadpool.size': u'0', u'hive.exec.orc.split.strategy': u'HYBRID', u'hive.llap.io.memory.mode': u'', u'hive.server2.active.passive.ha.registry.namespace': u'hs2ActivePassiveHA', u'hive.llap.auto.allow.uber': u'false', u'hive.metastore.uris': u'thrift://{host}:9083', u'hive.llap.daemon.logger': u'query-routing', u'hive.auto.convert.join.noconditionaltask.size': u'1145044992', u'hive.druid.bitmap.type': u'roaring', u'hive.server2.webui.enable.cors': u'true', u'hive.mapjoin.hybridgrace.hashtable': u'false', u'hive.druid.indexer.segments.granularity': u'DAY', u'hive.server2.idle.session.timeout': u'1d', u'hive.llap.execution.mode': u'only', u'hive.materializedview.rewriting.incremental': u'false', u'hive.llap.management.rpc.port': u'15004', u'hive.llap.io.use.lrfu': u'true'}, u'yarn-env': {u'content': u'\nexport HADOOP_YARN_HOME={{hadoop_yarn_home}}\nexport HADOOP_LOG_DIR={{yarn_log_dir}}\nexport HADOOP_SECURE_LOG_DIR={{yarn_log_dir}}\nexport HADOOP_PID_DIR={{yarn_pid_dir}}\nexport HADOOP_SECURE_PID_DIR={{yarn_pid_dir}}\nexport HADOOP_LIBEXEC_DIR={{hadoop_libexec_dir}}\nexport JAVA_HOME={{java64_home}}\nexport JAVA_LIBRARY_PATH="${JAVA_LIBRARY_PATH}:{{hadoop_java_io_tmpdir}}"\n\n# We need to add the EWMA and RFA appender for the yarn daemons only;\n# however, HADOOP_ROOT_LOGGER is shared by the yarn client and the\n# daemons. This is restrict the EWMA appender to daemons only.\nexport HADOOP_LOGLEVEL=${HADOOP_LOGLEVEL:-INFO}\nexport HADOOP_ROOT_LOGGER=${HADOOP_ROOT_LOGGER:-INFO,console}\nexport HADOOP_DAEMON_ROOT_LOGGER=${HADOOP_DAEMON_ROOT_LOGGER:-${HADOOP_LOGLEVEL},EWMA,RFA}\n\n# User for YARN daemons\nexport HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}\n\n# some Java parameters\n# export JAVA_HOME=/home/y/libexec/jdk1.6.0/\nif [ "$JAVA_HOME" != "" ]; then\n#echo "run java in $JAVA_HOME"\nJAVA_HOME=$JAVA_HOME\nfi\n\nif [ "$JAVA_HOME" = "" ]; then\necho "Error: JAVA_HOME is not set."\nexit 1\nfi\n\nJAVA=$JAVA_HOME/bin/java\nJAVA_HEAP_MAX=-Xmx1000m\n\n# For setting YARN specific HEAP sizes please use this\n# Parameter and set appropriately\nYARN_HEAPSIZE={{yarn_heapsize}}\n\n# check envvars which might override default args\nif [ "$YARN_HEAPSIZE" != "" ]; then\nJAVA_HEAP_MAX="-Xmx""$YARN_HEAPSIZE""m"\nfi\n\n# Resource Manager specific parameters\n\n# Specify the max Heapsize for the ResourceManager using a numerical value\n# in the scale of MB. 
For example, to specify an jvm option of -Xmx1000m, set\n# the value to 1000.\n# This value will be overridden by an Xmx setting specified in either HADOOP_OPTS\n# and/or YARN_RESOURCEMANAGER_OPTS.\n# If not specified, the default value will be picked from either YARN_HEAPMAX\n# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.\nexport YARN_RESOURCEMANAGER_HEAPSIZE={{resourcemanager_heapsize}}\n\n# Specify the JVM options to be used when starting the ResourceManager.\n# These options will be appended to the options specified as HADOOP_OPTS\n# and therefore may override any similar flags set in HADOOP_OPTS\n{% if security_enabled %}\nexport YARN_RESOURCEMANAGER_OPTS="-Djava.security.auth.login.config={{yarn_jaas_file}}"\n{% endif %}\n\n# Node Manager specific parameters\n\n# Specify the max Heapsize for the NodeManager using a numerical value\n# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set\n# the value to 1000.\n# This value will be overridden by an Xmx setting specified in either HADOOP_OPTS\n# and/or YARN_NODEMANAGER_OPTS.\n# If not specified, the default value will be picked from either YARN_HEAPMAX\n# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.\nexport YARN_NODEMANAGER_HEAPSIZE={{nodemanager_heapsize}}\n\n# Specify the max Heapsize for the timeline server using a numerical value\n# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set\n# the value to 1024.\n# This value will be overridden by an Xmx setting specified in either HADOOP_OPTS\n# and/or YARN_TIMELINESERVER_OPTS.\n# If not specified, the default value will be picked from either YARN_HEAPMAX\n# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.\nexport YARN_TIMELINESERVER_HEAPSIZE={{apptimelineserver_heapsize}}\n\n{% if security_enabled %}\nexport YARN_TIMELINESERVER_OPTS="-Djava.security.auth.login.config={{yarn_ats_jaas_file}}"\n{% endif %}\n\n{% if security_enabled %}\nexport YARN_TIMELINEREADER_OPTS="-Djava.security.auth.login.config={{yarn_ats_jaas_file}}"\n{% endif %}\n\n{% if security_enabled %}\nexport YARN_REGISTRYDNS_OPTS="-Djava.security.auth.login.config={{yarn_registry_dns_jaas_file}}"\n{% endif %}\n\n# Specify the JVM options to be used when starting the NodeManager.\n# These options will be appended to the options specified as HADOOP_OPTS\n# and therefore may override any similar flags set in HADOOP_OPTS\n{% if security_enabled %}\nexport YARN_NODEMANAGER_OPTS="-Djava.security.auth.login.config={{yarn_nm_jaas_file}} -Dsun.security.krb5.rcache=none"\n{% endif %}\n\n# so that filenames w/ spaces are handled correctly in loops below\nIFS=\n\n\n# default log directory and file\nif [ "$HADOOP_LOG_DIR" = "" ]; then\nHADOOP_LOG_DIR="$HADOOP_YARN_HOME/logs"\nfi\nif [ "$HADOOP_LOGFILE" = "" ]; then\nHADOOP_LOGFILE=\'yarn.log\'\nfi\n\n# default policy file for service-level authorization\nif [ "$YARN_POLICYFILE" = "" ]; then\nYARN_POLICYFILE="hadoop-policy.xml"\nfi\n\n# restore ordinary behaviour\nunset IFS\n\n# YARN now uses specific subcommand options of the pattern (command)_(subcommand)_OPTS for every\n# component. 
Because of this, HADDOP_OPTS is now used as a simple way to specify common properties\n# between all YARN components.\nHADOOP_OPTS="$HADOOP_OPTS -Dyarn.id.str=$YARN_IDENT_STRING"\nHADOOP_OPTS="$HADOOP_OPTS -Dyarn.policy.file=$YARN_POLICYFILE"\nHADOOP_OPTS="$HADOOP_OPTS -Djava.io.tmpdir={{hadoop_java_io_tmpdir}}"\n\n{% if security_enabled %}\nHADOOP_OPTS="$HADOOP_OPTS -Djavax.security.auth.useSubjectCredsOnly=false"\n{% endif %}\n\nexport YARN_NODEMANAGER_OPTS="$YARN_NODEMANAGER_OPTS -Dnm.audit.logger=INFO,NMAUDIT"\nexport YARN_RESOURCEMANAGER_OPTS="$YARN_RESOURCEMANAGER_OPTS -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY -Drm.audit.logger=INFO,RMAUDIT"\n\n{% if registry_dns_needs_privileged_access %}\n# If the DNS server is configured to use the standard privileged port 53,\n# the environment variables YARN_REGISTRYDNS_SECURE_USER and\n# YARN_REGISTRYDNS_SECURE_EXTRA_OPTS must be set.\nexport YARN_REGISTRYDNS_SECURE_USER={{yarn_user}}\nexport YARN_REGISTRYDNS_SECURE_EXTRA_OPTS="-jvm server"\n{% endif %}', u'yarn_user_nproc_limit': u'65536', u'resourcemanager_heapsize': u'1024', u'yarn_cgroups_enabled': u'false', u'is_supported_yarn_ranger': u'true', u'yarn_ats_user_keytab': u'', u'yarn_ats_user': u'yarn-ats', u'nodemanager_heapsize': u'1024', u'yarn_pid_dir_prefix': u'/var/run/hadoop-yarn', u'service_check.queue.name': u'default', u'apptimelineserver_heapsize': u'8072', u'registry.dns.bind-port': u'53', u'yarn_user_nofile_limit': u'32768', u'yarn_user': u'yarn', u'min_user_id': u'1000', u'yarn_heapsize': u'1024', u'yarn_ats_principal_name': u'', u'yarn_log_dir_prefix': u'/var/log/hadoop-yarn'}, u'beeline-log4j2': {u'content': u'\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nstatus = INFO\nname = BeelineLog4j2\npackages = org.apache.hadoop.hive.ql.log\n\n# list of properties\nproperty.hive.log.level = {{hive_log_level}}\nproperty.hive.root.logger = console\n\n# list of all appenders\nappenders = console\n\n# console appender\nappender.console.type = Console\nappender.console.name = console\nappender.console.target = SYSTEM_ERR\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} [%t]: %p %c{2}: %m%n\n\n# list of all loggers\nloggers = HiveConnection\n\n# HiveConnection logs useful info for dynamic service discovery\nlogger.HiveConnection.name = org.apache.hive.jdbc.HiveConnection\nlogger.HiveConnection.level = INFO\n\n# root logger\nrootLogger.level = ${sys:hive.log.level}\nrootLogger.appenderRefs = root\nrootLogger.appenderRef.root.ref = ${sys:hive.root.logger}'}, u'ranger-yarn-security': {}, u'ssl-server': {u'ssl.server.keystore.location': u'/etc/security/serverKeys/keystore.jks', u'ssl.server.keystore.keypassword': u'bigdata', u'ssl.server.truststore.location': u'/etc/security/serverKeys/all.jks', u'ssl.server.keystore.password': u'bigdata', u'ssl.server.truststore.password': u'bigdata', u'ssl.server.truststore.type': u'jks', u'ssl.server.keystore.type': u'jks', u'ssl.server.truststore.reload.interval': u'10000'}, u'hst-server-conf': {u'server.tmp.dir': u'/var/lib/smartsense/hst-server/tmp', u'server.url': u'http://{host}:9000', u'server.port': u'9000', u'customer.enable.flex.subscription': u'false', u'customer.account.name': u'unspecified', u'agent.request.processing.timeout': u'7200', u'server.ssl_enabled': u'false', u'customer.flex.subscription.id': u'', u'server.max.heap': u'2048', u'client.threadpool.size.max': u'40', u'agent.request.syncup.interval': u'180', u'gateway.registration.port': u'9450', u'gateway.port': u'9451', u'customer.smartsense.id': u'unspecified', u'gateway.host': u'embedded', u'customer.notification.email': u'unspecified', u'server.storage.dir': u'/var/lib/smartsense/hst-server/data'}, u'llap-daemon-log4j': {u'content': u'\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# "License"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n# This is the log4j2 properties file used by llap-daemons. 
There\'s several loggers defined, which\n# can be selected while configuring LLAP.\n# Based on the one selected - UI links etc need to be manipulated in the system.\n# Note: Some names and logic is common to this file and llap LogHelpers. Make sure to change that\n# as well, if changing this file.\n\nstatus = INFO\nname = LlapDaemonLog4j2\npackages = org.apache.hadoop.hive.ql.log\n\n# list of properties\nproperty.llap.daemon.log.level = {{hive_log_level}}\nproperty.llap.daemon.root.logger = console\nproperty.llap.daemon.log.dir = .\nproperty.llap.daemon.log.file = llapdaemon.log\nproperty.llap.daemon.historylog.file = llapdaemon_history.log\nproperty.llap.daemon.log.maxfilesize = {{hive_llap_log_maxfilesize}}MB\nproperty.llap.daemon.log.maxbackupindex = {{hive_llap_log_maxbackupindex}}\n\n# list of all appenders\nappenders = console, RFA, HISTORYAPPENDER, query-routing\n\n# console appender\nappender.console.type = Console\nappender.console.name = console\nappender.console.target = SYSTEM_ERR\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = %d{ISO8601} %5p [%t (%X{fragmentId})] %c{2}: %m%n\n\n# rolling file appender\nappender.RFA.type = RollingRandomAccessFile\nappender.RFA.name = RFA\nappender.RFA.fileName = ${sys:llap.daemon.log.dir}/${sys:llap.daemon.log.file}\nappender.RFA.filePattern = ${sys:llap.daemon.log.dir}/${sys:llap.daemon.log.file}_%d{yyyy-MM-dd-HH}_%i.done\nappender.RFA.layout.type = PatternLayout\nappender.RFA.layout.pattern = %d{ISO8601} %-5p [%t (%X{fragmentId})] %c: %m%n\nappender.RFA.policies.type = Policies\nappender.RFA.policies.time.type = TimeBasedTriggeringPolicy\nappender.RFA.policies.time.interval = 1\nappender.RFA.policies.time.modulate = true\nappender.RFA.policies.size.type = SizeBasedTriggeringPolicy\nappender.RFA.policies.size.size = ${sys:llap.daemon.log.maxfilesize}\nappender.RFA.strategy.type = DefaultRolloverStrategy\nappender.RFA.strategy.max = ${sys:llap.daemon.log.maxbackupindex}\n\n# history file appender\nappender.HISTORYAPPENDER.type = RollingRandomAccessFile\nappender.HISTORYAPPENDER.name = HISTORYAPPENDER\nappender.HISTORYAPPENDER.fileName = ${sys:llap.daemon.log.dir}/${sys:llap.daemon.historylog.file}\nappender.HISTORYAPPENDER.filePattern = ${sys:llap.daemon.log.dir}/${sys:llap.daemon.historylog.file}_%d{yyyy-MM-dd}_%i.done\nappender.HISTORYAPPENDER.layout.type = PatternLayout\nappender.HISTORYAPPENDER.layout.pattern = %m%n\nappender.HISTORYAPPENDER.policies.type = Policies\nappender.HISTORYAPPENDER.policies.size.type = SizeBasedTriggeringPolicy\nappender.HISTORYAPPENDER.policies.size.size = ${sys:llap.daemon.log.maxfilesize}\nappender.HISTORYAPPENDER.policies.time.type = TimeBasedTriggeringPolicy\nappender.HISTORYAPPENDER.policies.time.interval = 1\nappender.HISTORYAPPENDER.policies.time.modulate = true\nappender.HISTORYAPPENDER.strategy.type = DefaultRolloverStrategy\nappender.HISTORYAPPENDER.strategy.max = ${sys:llap.daemon.log.maxbackupindex}\n\n# queryId based routing file appender\nappender.query-routing.type = Routing\nappender.query-routing.name = query-routing\nappender.query-routing.routes.type = Routes\nappender.query-routing.routes.pattern = $${ctx:queryId}\n#Purge polciy for query-based Routing Appender\nappender.query-routing.purgePolicy.type = LlapRoutingAppenderPurgePolicy\n# Note: Do not change this name without changing the corresponding entry in LlapConstants\nappender.query-routing.purgePolicy.name = llapLogPurgerQueryRouting\n# default route\nappender.query-routing.routes.route-default.type = 
Route\nappender.query-routing.routes.route-default.key = $${ctx:queryId}\nappender.query-routing.routes.route-default.ref = RFA\n# queryId based route\nappender.query-routing.routes.route-mdc.type = Route\nappender.query-routing.routes.route-mdc.file-mdc.type = LlapWrappedAppender\nappender.query-routing.routes.route-mdc.file-mdc.name = IrrelevantName-query-routing\nappender.query-routing.routes.route-mdc.file-mdc.app.type = RandomAccessFile\nappender.query-routing.routes.route-mdc.file-mdc.app.name = file-mdc\nappender.query-routing.routes.route-mdc.file-mdc.app.fileName = ${sys:llap.daemon.log.dir}/${ctx:queryId}-${ctx:dagId}.log\nappender.query-routing.routes.route-mdc.file-mdc.app.layout.type = PatternLayout\nappender.query-routing.routes.route-mdc.file-mdc.app.layout.pattern = %d{ISO8601} %5p [%t (%X{fragmentId})] %c{2}: %m%n\n\n# list of all loggers\nloggers = PerfLogger, EncodedReader, NIOServerCnxn, ClientCnxnSocketNIO, DataNucleus, Datastore, JPOX, HistoryLogger, LlapIoImpl, LlapIoOrc, LlapIoCache, LlapIoLocking, TezSM, TezSS, TezHC\n\nlogger.TezSM.name = org.apache.tez.runtime.library.common.shuffle.impl.ShuffleManager.fetch\nlogger.TezSM.level = WARN\nlogger.TezSS.name = org.apache.tez.runtime.library.common.shuffle.orderedgrouped.ShuffleScheduler.fetch\nlogger.TezSS.level = WARN\nlogger.TezHC.name = org.apache.tez.http.HttpConnection.url\nlogger.TezHC.level = WARN\n\nlogger.PerfLogger.name = org.apache.hadoop.hive.ql.log.PerfLogger\nlogger.PerfLogger.level = DEBUG\n\nlogger.EncodedReader.name = org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl\nlogger.EncodedReader.level = INFO\n\nlogger.LlapIoImpl.name = LlapIoImpl\nlogger.LlapIoImpl.level = INFO\n\nlogger.LlapIoOrc.name = LlapIoOrc\nlogger.LlapIoOrc.level = WARN\n\nlogger.LlapIoCache.name = LlapIoCache\nlogger.LlapIoCache.level = WARN\n\nlogger.LlapIoLocking.name = LlapIoLocking\nlogger.LlapIoLocking.level = WARN\n\nlogger.NIOServerCnxn.name = org.apache.zookeeper.server.NIOServerCnxn\nlogger.NIOServerCnxn.level = WARN\n\nlogger.ClientCnxnSocketNIO.name = org.apache.zookeeper.ClientCnxnSocketNIO\nlogger.ClientCnxnSocketNIO.level = WARN\n\nlogger.DataNucleus.name = DataNucleus\nlogger.DataNucleus.level = ERROR\n\nlogger.Datastore.name = Datastore\nlogger.Datastore.level = ERROR\n\nlogger.JPOX.name = JPOX\nlogger.JPOX.level = ERROR\n\nlogger.HistoryLogger.name = org.apache.hadoop.hive.llap.daemon.HistoryLogger\nlogger.HistoryLogger.level = INFO\nlogger.HistoryLogger.additivity = false\nlogger.HistoryLogger.appenderRefs = HistoryAppender\nlogger.HistoryLogger.appenderRef.HistoryAppender.ref = HISTORYAPPENDER\n\n# root logger\nrootLogger.level = ${sys:llap.daemon.log.level}\nrootLogger.appenderRefs = root\nrootLogger.appenderRef.root.ref = ${sys:llap.daemon.root.logger}', u'hive_llap_log_maxbackupindex': u'240', u'hive_llap_log_maxfilesize': u'256'}, u'yarn-log4j': {u'content': u'\n#Relative to Yarn Log Dir Prefix\nyarn.log.dir=.\n#\n# Job Summary Appender\n#\n# Use following logger to send summary to separate file defined by\n# hadoop.mapreduce.jobsummary.log.file rolled daily:\n# hadoop.mapreduce.jobsummary.logger=INFO,JSA\n#\nhadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}\nhadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log\nlog4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender\n# Set the ResourceManager summary log filename\nyarn.server.resourcemanager.appsummary.log.file=hadoop-mapreduce.jobsummary.log\n# Set the ResourceManager summary log level and 
appender\nyarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}\n#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY\n\n# To enable AppSummaryLogging for the RM,\n# set yarn.server.resourcemanager.appsummary.logger to\n# LEVEL,RMSUMMARY in hadoop-env.sh\n\n# Appender for ResourceManager Application Summary Log\n# Requires the following properties to be set\n# - hadoop.log.dir (Hadoop Log directory)\n# - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)\n# - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)\nlog4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender\nlog4j.appender.RMSUMMARY.File=${yarn.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}\nlog4j.appender.RMSUMMARY.MaxFileSize={{yarn_rm_summary_log_max_backup_size}}MB\nlog4j.appender.RMSUMMARY.MaxBackupIndex={{yarn_rm_summary_log_number_of_backup_files}}\nlog4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n\nlog4j.appender.JSA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\nlog4j.appender.JSA.DatePattern=.yyyy-MM-dd\nlog4j.appender.JSA.layout=org.apache.log4j.PatternLayout\nlog4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}\nlog4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false\n\n# Appender for viewing information for errors and warnings\nyarn.ewma.cleanupInterval=300\nyarn.ewma.messageAgeLimitSeconds=86400\nyarn.ewma.maxUniqueMessages=250\nlog4j.appender.EWMA=org.apache.hadoop.yarn.util.Log4jWarningErrorMetricsAppender\nlog4j.appender.EWMA.cleanupInterval=${yarn.ewma.cleanupInterval}\nlog4j.appender.EWMA.messageAgeLimitSeconds=${yarn.ewma.messageAgeLimitSeconds}\nlog4j.appender.EWMA.maxUniqueMessages=${yarn.ewma.maxUniqueMessages}\n\n# Audit logging for ResourceManager\nrm.audit.logger=${hadoop.root.logger}\nlog4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=${rm.audit.logger}\nlog4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=false\nlog4j.appender.RMAUDIT=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.RMAUDIT.File=${yarn.log.dir}/rm-audit.log\nlog4j.appender.RMAUDIT.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RMAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n\nlog4j.appender.RMAUDIT.DatePattern=.yyyy-MM-dd\n\n# Audit logging for NodeManager\nnm.audit.logger=${hadoop.root.logger}\nlog4j.logger.org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger=${nm.audit.logger}\nlog4j.additivity.org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger=false\nlog4j.appender.NMAUDIT=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.NMAUDIT.File=${yarn.log.dir}/nm-audit.log\nlog4j.appender.NMAUDIT.layout=org.apache.log4j.PatternLayout\nlog4j.appender.NMAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n\nlog4j.appender.NMAUDIT.DatePattern=.yyyy-MM-dd', u'yarn_rm_summary_log_max_backup_size': u'256', u'yarn_rm_summary_log_number_of_backup_files': u'20'}, u'activity-zeppelin-env': {u'activity-zeppelin-env-content': u'#!/bin/bash\n\n# Copyright (c) 2011-2018, Hortonworks Inc. 
All rights reserved.\n# Except as expressly permitted in a written agreement between you\n# or your company and Hortonworks, Inc, any use, reproduction,\n# modification, redistribution, sharing, lending or other exploitation\n# of all or any part of the contents of this file is strictly prohibited.\n\n\nexport JAVA_HOME={{java_home}}\n# Additional jvm options. for example, export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=8g -Dspark.cores.max=16"\nexport ZEPPELIN_JAVA_OPTS="-Dhdp.version={{hdp_version}} -Dlog.file.name=activity-explorer.log -DSmartSenseActivityExplorer" \t\t\n# export ZEPPELIN_INTP_MEM \t\t# zeppelin interpreter process jvm mem options. Default = ZEPPELIN_MEM\n# export ZEPPELIN_INTP_JAVA_OPTS \t\t# zeppelin interpreter process jvm options. Default = ZEPPELIN_JAVA_OPTS\n\nexport RUN_AS_USER={{run_as_user}}\nexport ZEPPELIN_MEM=" -Xms256m -Xmx1024m -XX:MaxPermSize=256m"\nexport ZEPPELIN_LOG_DIR={{activity_log_dir}}\nexport ZEPPELIN_PID_DIR=/var/run/smartsense-activity-explorer\nexport ZEPPELIN_WAR_TEMPDIR=/var/lib/smartsense/activity-explorer/webapp\nexport ZEPPELIN_NOTEBOOK_DIR=/var/lib/smartsense/activity-explorer/notebook\nexport ZEPPELIN_CLASSPATH="/etc/ams-hbase/conf:${ZEPPELIN_CLASSPATH}"\nexport CLASSPATH=${ZEPPELIN_CLASSPATH}'}}}}} INFO 2019-04-30 09:48:27,118 ClusterCache.py:125 - Rewriting cache ClusterConfigurationCache for cluster 2 INFO 2019-04-30 09:48:27,150 security.py:135 - Event to server at /agents/host_level_params (correlation_id=4): {'hash': '2c083ec90869e40f20801e1d5435b32aebd7c4b9429d5817fdd7c829d32db36470be7f65317400e6ddb17e054337f970bb27c359758c21c7005be43f81a968d0'} INFO 2019-04-30 09:48:27,152 __init__.py:57 - Event from server at /user/ (correlation_id=4): {u'clusters': {}, u'hash': u'eb07592a169449fbe68562d6d84a0516c45476c4bff3c6709fe30104f8cdf7a162cf8059e310180490a8cd109762288d746a37e59311930d503131d08fb0bfb7'} INFO 2019-04-30 09:48:27,155 security.py:135 - Event to server at /agents/alert_definitions (correlation_id=5): {'hash': '631529ddbef4fe8b1c007b0ddb135266ced5bdef039cec8dcef239aaf3c55f6363f3861a318c04abc1daf4483a0904924d8596fdf5374530d9efab5c50ff971f'} INFO 2019-04-30 09:48:27,157 __init__.py:57 - Event from server at /user/ (correlation_id=5): {u'clusters': {}, u'hostName': u'{clustername}hdpslave02', u'hash': u'05da0d08e560e3abb6f2af7edb672662f2782d0caa1ddcb30d4e171d15d0e3e776db902ea89de7e7c1c03499d93f78338f180e8c64f7769c8ffeaf70869cb262', u'eventType': u'CREATE'} INFO 2019-04-30 09:48:27,159 AlertSchedulerHandler.py:212 - [AlertScheduler] Rescheduling all jobs... INFO 2019-04-30 09:48:27,160 AlertSchedulerHandler.py:233 - [AlertScheduler] Reschedule Summary: 0 unscheduled, 0 rescheduled INFO 2019-04-30 09:48:27,160 security.py:135 - Event to server at /heartbeat (correlation_id=6): {'id': 0} INFO 2019-04-30 09:48:27,164 __init__.py:57 - Event from server at /user/ (correlation_id=6): {u'status': u'OK', u'id': 1} INFO 2019-04-30 09:48:37,168 security.py:135 - Event to server at /heartbeat (correlation_id=7): {'id': 1} INFO 2019-04-30 09:48:37,170 __init__.py:57 - Event from server at /user/ (correlation_id=7): {u'status': u'OK', u'id': 2}
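For what it's worth, the tail of that agent log looks healthy: the cluster configuration cache is rewritten, the alert definitions are rescheduled, and the heartbeats shown all get an OK back from the server. Since most of the file is taken up by the configuration cache dump, the relevant lines can be pulled out with a quick grep; this is only a minimal sketch, assuming the default agent log location:

# grep -iE 'heartbeat|error|warn' /var/log/ambari-agent/ambari-agent.log | tail -n 100

And to cross-check what the ResourceManager itself reports for the NodeManagers (as opposed to the Ambari dashboard), the stock YARN CLI can be run from any node with the Hadoop client installed:

# yarn node -list -all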