Created 06-01-2018 09:59 AM
Hi, I'm running an HDP 2.6.4 Sandbox on a VirtualBox VM, and I've set up log4j.properties for spark-sql and hive. I changed the logging level from INFO to WARN for both hive and spark-sql and saved the files under the default /etc/hive/ and /etc/spark/ directories. However, when I rebooted the VM, all the settings had reverted to the defaults.
Any idea how to make the config settings permanent?
Created 06-01-2018 10:07 AM
On an Ambari-managed cluster you are not supposed to edit files like "/etc/hive/conf/" or "/etc/spark/conf/" manually on the filesystem.
This is because when those services are restarted from the Ambari UI (API calls), or the VM is rebooted (which also restarts those services), Ambari regenerates the content of those files from the configuration stored in the Ambari DB. So you should always make the changes via Ambari, either through the Ambari UI or through Ambari API calls (or the configs.py script that ships with Ambari).
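If you prefer to script the change rather than click through the UI, here is a minimal sketch against the Ambari REST API (the same API the configs.py script wraps): read the current hive-log4j config version, flip the level, and submit it back as a new desired config. The host, cluster name, credentials, and version tag below are assumptions for the Sandbox; adjust them for your environment, and note the Spark config type may be named something like "spark-log4j-properties" rather than "hive-log4j".

import json
import requests

AMBARI = "http://localhost:8080"   # assumed Sandbox Ambari host/port
CLUSTER = "Sandbox"                # assumed cluster name; check Ambari UI
AUTH = ("admin", "admin")          # your real admin credentials
HEADERS = {"X-Requested-By": "ambari"}  # header Ambari requires on writes

# 1. Find the tag of the currently desired hive-log4j config version.
desired = requests.get(
    f"{AMBARI}/api/v1/clusters/{CLUSTER}?fields=Clusters/desired_configs",
    auth=AUTH, headers=HEADERS).json()
tag = desired["Clusters"]["desired_configs"]["hive-log4j"]["tag"]

# 2. Fetch the properties stored under that tag.
cfg = requests.get(
    f"{AMBARI}/api/v1/clusters/{CLUSTER}/configurations"
    f"?type=hive-log4j&tag={tag}",
    auth=AUTH, headers=HEADERS).json()
props = cfg["items"][0]["properties"]

# 3. Edit the log4j template (stored in the "content" property for
#    *-log4j config types). A crude global replace for illustration;
#    a real script should edit only the rootLogger line.
props["content"] = props["content"].replace("INFO", "WARN")

# 4. PUT the edited properties back as a new desired config version.
payload = {"Clusters": {"desired_config": {
    "type": "hive-log4j",
    "tag": "version-warn-1",  # any tag string not already in use
    "properties": props}}}
requests.put(f"{AMBARI}/api/v1/clusters/{CLUSTER}",
             auth=AUTH, headers=HEADERS, data=json.dumps(payload))

After this you still need to restart the affected service (from the UI or via the API) for the new version to be written out to /etc/hive/conf/, and it will then survive reboots because it lives in the Ambari DB.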
Created 06-01-2018 04:19 PM
Thanks for the help Jay. I've logged into Ambari as "admin" instead of "maria_dev" (I had to get root on the host VM and reset the Ambari admin password first) and made the changes from there. The default "maria_dev" profile doesn't have admin rights, so I couldn't make any changes in the sandbox initially; that's why I had gone into the /etc folder. You're right, accessing Ambari as admin makes it unnecessary to touch the file structure directly, and it keeps a log of changes made to all settings, which is an added bonus.