Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2459 | 04-27-2020 03:48 AM
 | 4896 | 04-26-2020 06:18 PM
 | 3984 | 04-26-2020 06:05 PM
 | 3227 | 04-13-2020 08:53 PM
 | 4939 | 03-31-2020 02:10 AM
01-04-2019
05:58 AM
2 Kudos
@Nitin Suradkar If you have disk space issues then you can delete the old hdfs-audit.log files. However, for auditing purposes (i.e. who did what on HDFS) this log is useful. If you want to set a particular size limit for your hdfs-audit log, then the best option is to change the appender named "DRFAAUDIT" inside the "Advanced hdfs-log4j" section of Ambari to use "RollingFileAppender" instead of the default "DailyRollingFileAppender": Ambari UI --> HDFS --> Config --> Advanced --> Advanced hdfs-log4j. Edit the hdfs-log4j template section containing the "DRFAAUDIT" appender definition to something like the following:
hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.MaxFileSize=500MB
log4j.appender.DRFAAUDIT.MaxBackupIndex=20
Restart HDFS afterwards. As an alternative approach, you can refer to the following article to have the HDFS audit logs compressed automatically: How to enable HDFS Audit log rotation and zipping for logs? https://community.hortonworks.com/content/supportkb/150088/how-to-enable-hdfs-audit-log-rotation-and-zipping.html
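As a quick sanity check after the restart, listing the audit log files should show the size-capped log and its numbered backups. This is only a sketch; the path assumes the common default ${hadoop.log.dir} of /var/log/hadoop/hdfs, which may differ on your cluster:
# ls -lh /var/log/hadoop/hdfs/hdfs-audit.log*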
01-04-2019
03:08 AM
1 Kudo
@Leonardo Araujo Which Sandbox are you using (HDP / HDF), and what is the version of your Sandbox? If you want to use the 3.x series of the Sandbox with Kafka, then I think the preferred choice will be one of the following sandboxes. For the HDF Sandbox 3.1 provided components please refer to: https://hortonworks.com/tutorial/hortonworks-sandbox-guide/section/2/#services-started-automatically . For the HDP Sandbox 3 provided service list you can refer to: https://hortonworks.com/tutorial/hortonworks-sandbox-guide/section/1/#hdp-services-started-automatically-on-startup . The Kafka service is not started automatically. Can you please share a screenshot of where you do not see the "Kafka" service in the Ambari UI? Do you see a Kafka directory inside the following folder when you SSH to the Sandbox using SSH port 2222 or the web client at http://localhost:4200 ?
# ls -l /etc/kafka/conf
In the Ambari UI, when you click the "Add Service" button, do you see the "Kafka" service checked on the next page?
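If Kafka turns out to be installed but stopped, it can also be started through the Ambari REST API. The call below is only a sketch: it assumes the default admin/admin login and that the cluster is named "Sandbox" (check the actual cluster name in your Ambari UI URL):
# curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"RequestInfo":{"context":"Start Kafka"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
    http://localhost:8080/api/v1/clusters/Sandbox/services/KAFKA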
01-03-2019
11:30 PM
@Michael Bronson What is the value set for the "yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage" property in your YARN config? This is the maximum percentage of disk space utilization allowed after which a disk is marked as bad. Values can range from 0.0 to 100.0; if the value is greater than or equal to 100, the NodeManager checks for a full disk. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs, and the default value is 90.0%. Hence either clean up the disk that the unhealthy node is using, or increase the threshold in yarn-site.xml: Ambari --> YARN --> Configs --> Advanced yarn-site --> check "yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage". Regarding "yarn.nodemanager.log-dirs": it is always best to make sure a dedicated disk is allocated. You can check the path set in "yarn.nodemanager.log-dirs" and move it to a dedicated disk where enough space is available; this property determines where the container logs are stored on the node while containers are running. Also please check the value of "yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb". Its default value is 0; it is the minimum space that must be available on a disk for that disk to be used, and it also applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs.
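To see how close each NodeManager disk is to that threshold, you can check the current usage of the local and log directories on the unhealthy node. The paths below are only the common HDP defaults; substitute whatever your yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs actually point to:
# df -h /hadoop/yarn/local /hadoop/yarn/log
# grep -A1 'disk-health-checker' /etc/hadoop/conf/yarn-site.xml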
01-03-2019
08:06 AM
@Harish More Can you please check whether you have set the "ranger.usersync.unix.backend=nss" property inside the "ranger-ugsync-site.xml" file after setting up SSSD, followed by a usersync restart? Please refer to the following article for more details about this property: https://community.hortonworks.com/content/supportkb/222504/unable-to-sync-ranger-user-from-local-unix-nss.html
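A quick way to verify the current value on the usersync host (the path below assumes a standard HDP Ranger Usersync install; adjust it if your usersync configuration lives elsewhere):
# grep 'ranger.usersync.unix.backend' /etc/ranger/usersync/conf/ranger-ugsync-site.xml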
01-03-2019
12:33 AM
1 Kudo
@Dan Dan Can you check the host by running the netcat command to find out whether the HDFS user is able to bind to port 50070, where you are getting the "java.net.BindException: Port in use: ec2-34-211-154-113.us-west-2.compute.amazonaws.com:50070" error? You can use netcat to attempt binding port 50070 as the "hdfs" user as follows (if any other process is already using that port, then you should see an "Address already in use" message in the output):
# su - hdfs
# nc -l ec2-34-211-154-113.us-west-2.compute.amazonaws.com 50070
Check the output of the above command to see if you get a message saying "Address already in use. QUITTING." If yes, then find the PID of that process and kill it.
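If the bind fails, another way to identify the process that already owns the port is with netstat or lsof (a sketch; it assumes the net-tools and lsof packages are installed on the host):
# netstat -tnlp | grep ':50070'
# lsof -i :50070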
01-02-2019
10:38 PM
@ChanHyuk Park As I mentioned in my previous update, you should have your RedHat Base repo set up on the VM where you are trying to install the "redhat-lsb" package, because that package comes from the RedHat Base repo. I currently do not have a RedHat 6 VM available with me, however you should see a repo file inside /etc/yum.repos.d/. The following example is for CentOS (not for RedHat):
# ls -l /etc/yum.repos.d/CentOS-Base.repo
# less /etc/yum.repos.d/CentOS-Base.repo
So please consult with your OS admin or cloud admin to get the missing RedHat Base repo installed on your host so that you can install the "redhat-lsb" packages.
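To confirm which repositories are actually enabled on the host (and whether a base repo is missing), a quick check like the following may help:
# yum repolist enabled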
01-02-2019
09:23 AM
@ChanHyuk Park The Hadoop package has a dependency on "redhat-lsb"; you can verify it by running the following commands:
# repoquery --requires 'hadoop_2_5_3_0_37-2.7.3.2.5.3.0-37.el6.x86_64' > /tmp/dependencies.txt
# grep 'redhat-lsb' /tmp/dependencies.txt
So please make sure that your Amazon Linux 2 instance has access to a RedHat repo and that the package redhat-lsb is installed. Usually the mentioned package "redhat-lsb-core" comes from the RedHat Base repo. Example:
# yum whatprovides redhat-lsb
# yum install redhat-lsb redhat-lsb-core
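Since package naming on Amazon Linux 2 can differ from RHEL, it may also be worth checking what the distribution's own repos offer before wiring in an external repo (a hedged check, not a guaranteed fix):
# yum list available 'redhat-lsb*'
# yum whatprovides '*/lsb_release'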
01-02-2019
09:10 AM
@Michael Bronson We can find the AMS collector mode by looking at the "ams-site.xml" file:
# grep -A1 'mode' /etc/ambari-metrics-collector/conf/ams-site.xml
<name>timeline.metrics.service.operation.mode</name>
<value>embedded</value>
If AMS is running in embedded mode and you keep getting the same ZooKeeper error, then it is better to try increasing the AMS collector heap size to 2GB or more (like 4GB) and then start it again. Please try increasing the HBase Master heapsize as well. For AMS tuning it is best to refer to the following article: https://community.hortonworks.com/content/supportkb/208353/troubleshooting-ambari-metrics-ams.html
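The heap values mentioned above are managed through the AMS configs in Ambari (the collector heap under ams-env and the HBase Master heap under ams-hbase-env). To see what the collector is currently running with, a check like the following can help; the path assumes a default AMS install and may differ on your host:
# grep -i 'heapsize' /etc/ambari-metrics-collector/conf/ams-env.sh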
01-02-2019
08:29 AM
@Michael Bronson Your current thread query looks very similar to the other HCC thread opened by you: https://community.hortonworks.com/questions/231177/metrics-failed-on-orgapachehadoophbasezookeeperzoo.html?childToView=232247#answer-232247 . Can you please mark one of the HCC threads as closed so that all HCC users can respond to the same single thread?
01-02-2019
08:19 AM
@Michael Bronson Your current thread query looks very similar to the other HCC thread opened by you: https://community.hortonworks.com/questions/231176/metrics-collector-ams-hbase-unsecure-received-unex.html?childToView=232246#answer-232246 . Can you please mark one of the HCC threads as closed so that all HCC users can respond to the same single thread?