Member since
07-06-2017
9
Posts
0
Kudos Received
0
Solutions
01-05-2018
02:38 AM
We have an HDF cluster with customized settings that route all Ranger audit logs to the file system. The long-term plan is to feed these logs into a central platform, but currently they sit at a remote site, and ordinary Kafka message reads generate audit entries that fill up the server disk very quickly.

-rw-r--r-- 1 kafka hadoop 20067988960 Jan 3 16:28 ranger_kafka_audit.log
# note the size difference after a few minutes (almost 2 GB in 7 min)
-rw-r--r-- 1 kafka hadoop 18363271828 Jan 3 16:21 ranger_kafka_audit.log

We are looking for a temporary solution that writes only audit entries tagged "forbidden" to the file. Does anyone have an idea how to customize the configuration so that we can control what gets logged?

# Turn on ranger kafka audit log
log4j.appender.RANGER_AUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RANGER_AUDIT.File=${kafka.logs.dir}/ranger_kafka_audit.log
log4j.appender.RANGER_AUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RANGER_AUDIT.layout.ConversionPattern=%m%n
# note: MaxFileSize and MaxBackupIndex are RollingFileAppender options; DailyRollingFileAppender ignores them and rolls by date instead
log4j.appender.RANGER_AUDIT.MaxFileSize=100MB
log4j.appender.RANGER_AUDIT.MaxBackupIndex=30
log4j.logger.ranger.audit=INFO,RANGER_AUDIT
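A note for anyone trying this: log4j 1.x's properties format (what PropertyConfigurator reads, shown above) cannot attach filters to an appender, so content-based filtering requires switching that appender to the XML configuration format. A minimal sketch, assuming the audit messages contain the literal string "forbidden" (whether Ranger emits that exact tag in the message text is an assumption to verify against your log):

```xml
<!-- Hypothetical log4j.xml fragment: keep only "forbidden" audit entries. -->
<appender name="RANGER_AUDIT" class="org.apache.log4j.DailyRollingFileAppender">
  <param name="File" value="${kafka.logs.dir}/ranger_kafka_audit.log"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%m%n"/>
  </layout>
  <!-- Accept events whose message contains "forbidden"... -->
  <filter class="org.apache.log4j.varia.StringMatchFilter">
    <param name="StringToMatch" value="forbidden"/>
    <param name="AcceptOnMatch" value="true"/>
  </filter>
  <!-- ...and drop everything else. -->
  <filter class="org.apache.log4j.varia.DenyAllFilter"/>
</appender>
```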
Labels:
- Apache Ranger
07-28-2017
07:40 AM
Hi Jay, thanks for your reply. The problem is solved using curl with a slightly changed payload. I followed the instructions documented at https://cwiki.apache.org/confluence/display/AMBARI/Blueprints and noticed a difference when switching tools. When using Postman, I have to add ":" before the cluster name in the GET request to export the blueprint, so my URL looks like http://testambarihost:8080/api/v1/clusters/:mycluster?format=blueprint. If I change the URL to http://testambarihost:8080/api/v1/clusters/mycluster?format=blueprint I get the exception below: { "status" : 404, "message" : "The requested resource doesn't exist: Cluster not found, clusterName=mycluster" }. When using curl, no ":" is needed, and the POST request you mentioned works. BTW, one more modification turned out to be required: in the response body from step one, I had to manually remove the top-level wrapper that wraps the blueprint as array items; after removing that wrapper, I could POST without issue.
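The working flow can be sketched as below. This is a dry run that only builds and prints the two curl commands, since "testambarihost", "mycluster", "testblueprint", and the admin credentials are placeholders from this thread rather than a live server; note that Ambari requires the X-Requested-By header on POST requests.

```shell
# Base Ambari REST endpoint (placeholder host from the thread).
AMBARI="http://testambarihost:8080/api/v1"

# Step 1: export the blueprint of the running cluster.
# With curl, no ":" prefix is needed on the cluster name.
EXPORT_CMD="curl -u admin:admin ${AMBARI}/clusters/mycluster?format=blueprint -o exported.json"

# Step 2: register the exported blueprint under a new name.
# Strip the top-level array wrapper from exported.json first, then POST
# the bare blueprint object. X-Requested-By is mandatory for POSTs.
REGISTER_CMD="curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d @exported.json ${AMBARI}/blueprints/testblueprint"

# Dry run: print the commands instead of executing them.
echo "$EXPORT_CMD"
echo "$REGISTER_CMD"
```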
07-28-2017
06:51 AM
Hi, I am trying to automate deployment using blueprints. Step 1: I export the blueprint of my existing cluster with an HTTP GET to http://&lt;ambariserverhost&gt;:8080/api/v1/clusters/:&lt;myclusterName&gt;?format=blueprint, using basic authentication with the Ambari console admin user and password. Step 2: I try to register this exported blueprint with Ambari by POSTing to http://&lt;ambariserverhost&gt;:8080/api/v1/blueprints/:testblueprint, and I get a "405 Method Not Allowed" exception. What could be the possible reason? I also tried to POST the same body to another Ambari server where no cluster has been created yet and got the same exception (I wanted to confirm the failure is not caused by registering a blueprint while I already have an existing cluster).
Labels:
- Apache Ambari
07-06-2017
09:56 AM
@Jay SenSharma Thanks very much Jay, you are right. The steps changed a little compared to the previous version, and I was wondering why I did not need to upgrade the mpack this time (I thought it was merged into the ambari-server upgrade step). Following your reply I found that this step is mandatory, but it is documented in a separate link, before even the backup step. After re-running the mpack upgrade step everything works fine for me.
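For anyone hitting the same problem, the mandatory step can be sketched as the dry run below; the mpack tarball path is a placeholder (use the HDF 3.0 mpack location from the release notes for your version), so the command is printed rather than executed here.

```shell
# Placeholder path to the downloaded HDF 3.0 management pack bundle.
MPACK="/tmp/hdf-ambari-mpack-3.0.0.0.tar.gz"

# upgrade-mpack replaces the old HDF stack definitions under
# /var/lib/ambari-server/resources/stacks/HDF, and must run BEFORE
# "ambari-server upgrade" so the schema upgrade can parse them.
UPGRADE_CMD="ambari-server upgrade-mpack --mpack=${MPACK} --verbose"

# Dry run: print the command instead of executing it.
echo "$UPGRADE_CMD"
```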
07-06-2017
09:28 AM
Hi, I am having the issue below while upgrading HDF 2.1.2 to HDF 3.0. It happens while following the instructions at https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.0/bk_ambari-upgrade/content/ambari_upgrade_guide.html, at the "ambari-server upgrade" step. (By the way, I also hit bugs 82311 and 82561, and have tried the workarounds from the release instructions.) The exception message is:

INFO: Upgrading database schema
INFO: Return code from schema upgrade command, retcode = 1
ERROR: Error executing schema upgrade, please check the server logs.
ERROR: Error output from schema upgrade command:
ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: Guice provision errors:
1) Error injecting constructor, org.apache.ambari.server.AmbariException: Unable to parse stack upgrade file at location: /var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.0.xml
at org.apache.ambari.server.stack.StackManager.&lt;init&gt;(StackManager.java:144)
while locating org.apache.ambari.server.stack.StackManager annotated with interface com.google.inject.assistedinject.Assisted
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:261)
at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:135)
and:

[org.xml.sax.SAXParseException; systemId: file:/var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.0.xml; lineNumber: 24; columnNumber: 22; cvc-complex-type.2.4.a: Invalid content was found starting with element 'downgrade-allowed'. One of '{upgrade-path, order}' is expected.]
at javax.xml.bind.helpers.AbstractUnmarshallerImpl.createUnmarshalException(AbstractUnmarshallerImpl.java:335)

I've checked the content of /var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.0.xml at lineNumber 24:

&lt;downgrade-allowed&gt;false&lt;/downgrade-allowed&gt;

BTW, I've attached the file for troubleshooting: nonrolling-upgrade-20.xml
Labels:
- Apache Ambari