Member since: 04-19-2018
Posts: 21
Kudos Received: 0
Solutions: 0
07-08-2019
02:26 AM
The above question and the entire response thread below were originally posted in the Community Help track. On Mon Jul 8 02:24 UTC 2019, a member of the HCC moderation staff moved it to the Data Science & Advanced Analytics track. The Community Help track is intended for questions about using the HCC site itself, not technical questions about administering Spark2.
06-11-2019
04:19 AM
@John Are you still getting the same error, or did the suggestion resolve the issue?
04-13-2019
07:39 AM
@Nikhil Belure You will need to adjust the value of the AMS heap size along with the memstore limits. The two relevant properties are:

hbase.regionserver.global.memstore.lowerLimit = 0.3
When memstores are being forced to flush to make room in memory, flushing keeps going until it hits this mark. Defaults to 35% of heap. A value equal to hbase.regionserver.global.memstore.upperLimit causes the minimum possible flushing to occur when updates are blocked due to memstore limiting.

hbase.regionserver.global.memstore.upperLimit = 0.35
Maximum size of all memstores in the region server before new updates are blocked and flushes are forced. Defaults to 40% of heap.

So what is the current size of your Metrics Collector heap? With the above setup, on a cluster of fewer than 20 nodes, setting Metrics Collector Heap Size = 1024 in Advanced ams-env should work. Please use this as a reference for tuning your AMS: https://cwiki.apache.org/confluence/display/AMBARI/Configurations+-+Tuning Hope that helps
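For reference, a minimal sketch of how these values might look once applied. The memstore limit property names come from the answer above; metrics_collector_heapsize as the ams-env heap property is an assumption and may vary by Ambari version:

# Advanced ams-hbase-site: memstore flush thresholds
hbase.regionserver.global.memstore.lowerLimit=0.3
hbase.regionserver.global.memstore.upperLimit=0.35

# Advanced ams-env: Metrics Collector Heap Size, in MB (assumed property name)
metrics_collector_heapsize=1024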
04-03-2019
11:37 PM
@Nikhil Belure You can use any of the following:
- NiFi, using the List+Fetch File [or SFTP] processors followed by a PutHDFS processor, (or)
- hadoop distcp to copy the local files into HDFS, as described in this thread, (or)
- if your directory has a lot of files in it, it will be much faster to tar (or zip) the files first and then run the copyFromLocal command.
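A minimal sketch of the distcp and tar-then-copy options, assuming a hypothetical local source directory /data/incoming and HDFS target /landing:

# distcp from the local filesystem into HDFS
# (the file:// path must be readable from the nodes running the copy)
hadoop distcp file:///data/incoming hdfs:///landing

# tar the directory first, then copy the single archive into HDFS
tar -czf /tmp/incoming.tar.gz -C /data/incoming .
hdfs dfs -copyFromLocal /tmp/incoming.tar.gz /landing/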
04-16-2019
03:54 PM
I made the following changes, but the hdfs-audit logs are still not rotating:

hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
#log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
# DatePattern belongs to DailyRollingFileAppender and is ignored by RollingFileAppender
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.MaxFileSize=100MB
log4j.appender.DRFAAUDIT.MaxBackupIndex=5
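One thing worth checking (an assumption on my part, since the thread does not show it): on HDP clusters, HADOOP_NAMENODE_OPTS in hadoop-env.sh typically passes -Dhdfs.audit.logger=... to the NameNode, which overrides the hdfs.audit.logger default set in log4j.properties. You can confirm what the running NameNode was actually started with:

# Inspect the live NameNode JVM arguments for the audit logger setting
ps -ef | grep -i NameNode | tr ' ' '\n' | grep -i 'hdfs.audit.logger'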