Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2720 | 04-27-2020 03:48 AM |
| | 5280 | 04-26-2020 06:18 PM |
| | 4445 | 04-26-2020 06:05 PM |
| | 3570 | 04-13-2020 08:53 PM |
| | 5377 | 03-31-2020 02:10 AM |
08-23-2018
08:32 AM
@M Ax Unfortunately, those are a few views that are no longer supported with Ambari 2.7 and hence were removed. See: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-upgrade/content/bhvr_changes_upgrade_hdp3_amb27.html
08-22-2018
08:02 AM
@Zyann You will need to make sure that you have imported the HDFS certificates into the Ambari server truststore. You can refer to the following doc to learn more about it: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-security/content/set_up_truststore_for_ambari_server.html First choose option [4] to set up the truststore, then run the command again and choose option [5] to import the HDFS certificate into the Ambari truststore.
# ambari-server setup-security
.
.
[4] Setup truststore.
[5] Import certificate to truststore.
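If you need to fetch the HDFS (NameNode) certificate manually before importing it, a minimal sketch could look like the following, assuming the NameNode HTTPS endpoint is nn1.example.com:50470 and that the truststore path and password are the ones you configured under option [4] (all of these are placeholders for your own values):
# openssl s_client -connect nn1.example.com:50470 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > /tmp/namenode.crt
# keytool -importcert -alias namenode -file /tmp/namenode.crt -keystore /path/to/ambari-truststore.jks -storepass <truststore-password>
After the import, restart the Ambari server so that it picks up the updated truststore.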
08-22-2018
12:42 AM
1 Kudo
@Deepak SANAGAPALLI Also, since you are securing your NiFi, can you check that the following HTTPS property "nifi.web.https.host" is set up:
# grep 'nifi.web.https.host' /etc/nifi/conf/nifi.properties
(Or list all the web properties as follows)
# grep 'nifi.web.htt' /etc/nifi/conf/nifi.properties
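For reference, on a node secured over HTTPS the web properties typically end up looking something like the following (the hostname and port here are only illustrative placeholders, not values from your cluster; with HTTPS enabled the plain HTTP host/port are usually left blank):
# grep 'nifi.web' /etc/nifi/conf/nifi.properties
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=nifi-node1.example.com
nifi.web.https.port=9091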
08-22-2018
12:28 AM
@Deepak SANAGAPALLI We see the following error:
2018-08-21 16:22:24,529 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.net.SocketException: Unresolved address
at sun.nio.ch.Net.translateToSocketException(Net.java:131)
at sun.nio.ch.Net.translateException(Net.java:157)
at sun.nio.ch.Net.translateException(Net.java:163)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:76)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:298)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:431)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:777)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:268)
Caused by: java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:218)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
... 9 common frames omitted
NiFi internally starts a Jetty web container. The above error indicates that your configuration might not specify a resolvable address for NiFi, which is why we see "Unresolved Address". Can you please share your NiFi config so that we can see why the address is unresolved? Also, please share the output of the following commands to verify that your NiFi host has a valid hostname:
# hostname -f
# cat /etc/hosts
# grep 'nifi.cluster.node.address' /etc/nifi/conf/nifi.properties
# grep 'nifi.web.http.host' /etc/nifi/conf/nifi.properties
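If the hostname looks correct, you can also confirm that it actually resolves on the NiFi node before restarting; for example:
# getent hosts $(hostname -f)
If this returns nothing, fix DNS or /etc/hosts first, and then make sure nifi.web.http.host (or nifi.web.https.host for a secured setup) is set to that resolvable FQDN.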
08-21-2018
10:56 PM
@yadir Aguilar For a single-node cluster setup with Ambari you will need at least 8 GB of RAM so that you can run a couple of services (like YARN/HDFS/MR/Hive, etc.) along with the Ambari Server and Agent. For disk space, 10-20 GB should be sufficient. Software requirements such as Python/OpenSSL/operating system details can be found at: https://supportmatrix.hortonworks.com If you just want to get going with HDP/HDF then the HDP Sandbox will be the best option: https://hortonworks.com/tutorial/learning-the-ropes-of-the-hortonworks-sandbox/ If you want to see how a single-node cluster installation is done, you can also take a look at the HDP Sandbox, which is itself a single-node cluster setup.
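Before starting the install, you can quickly confirm that the host meets these requirements with standard Linux commands (nothing Ambari-specific here):
# free -h
# df -h /
# python --version
# cat /etc/os-release
free/df show the available RAM and disk, and the Python and OS versions can be compared against the support matrix linked above.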
08-21-2018
09:57 PM
3 Kudos
@yadir Aguilar Ambari Server is a Java-based program that is responsible for managing and monitoring cluster nodes/services/components. Ambari Server also provides ways to view the cluster details via the UI and via API calls. It runs on Jetty and also uses Python/shell scripts for different purposes. The Ambari Server stores the cluster details and configs in a database (by default it uses Postgres in embedded mode). The Ambari Agent, on the other hand, is a Python program that runs on the nodes which need to be part of the cluster. So if you want to monitor and manage resources/components on a host, then you will need to install the Agent on that host. Ambari Agents are also responsible for running the scheduled alert scripts that monitor the health of various resources. The Ambari Server collects data from across your cluster. Each host has a copy of the Ambari Agent, which allows the Ambari Server to control that host. So it is good to have the Agent installed on the Ambari Server host as well (but it is not mandatory).
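If you want a quick way to verify both pieces are up, each ships a status subcommand (run as root on the respective hosts):
# ambari-server status
# ambari-agent status
The first is run on the Ambari Server host, the second on each cluster host where an Agent is installed.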
08-21-2018
01:45 AM
@Laura Orcutt Good to know that it worked for you. Can you please click the "Accept" button on the correct answer so that this HCC thread can be marked as "Answered"? That way it will help other HCC users quickly find/browse the correct answers.
08-20-2018
03:16 AM
@Laura Orcutt The SSH password will be "hadoop". SSH port 2222 is meant for connecting to the actual HDP container (which is dockerized):
# ssh root@127.0.0.1 -p 2222
Enter Password: hadoop
If you want to log in to the Sandbox shell using a web browser, then you can also use the link: http://localhost:4200 (Enter Password: hadoop)
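Since the HDP container listens on port 2222, the same port applies if you want to copy files into it; for example (the file path below is just a placeholder):
# scp -P 2222 /path/to/somefile.csv root@127.0.0.1:/tmp/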
08-19-2018
10:40 AM
1 Kudo
Sometimes it is desired to have the logs rotated as well as compressed. We can use log4j extras in order to achieve this. For processes like the NameNode / DataNode, etc., we can use the approach described in this article: https://community.hortonworks.com/articles/50058/using-log4j-extras-how-to-rotate-as-well-as-zip-th.html However, when we try to use the same approach in Ambari 2.6 for Ambari Metrics Collector log compression and rotation, it will not work, and we might see warnings/errors in the collector logs, something like the following:
log4j:WARN Failed to set property [triggeringPolicy] to value "org.apache.log4j.rolling.SizeBasedTriggeringPolicy".
log4j:WARN Failed to set property [rollingPolicy] to value "org.apache.log4j.rolling.FixedWindowRollingPolicy".
log4j:WARN Please set a rolling policy for the RollingFileAppender named 'file'
log4j:ERROR No output stream or file set for the appender named [file].
(OR)
log4j:ERROR A "org.apache.log4j.rolling.SizeBasedTriggeringPolicy" object is not
assignable to a "org.apache.log4j.rolling.RollingPolicy" variable.
log4j:ERROR The class "org.apache.log4j.rolling.RollingPolicy" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@2328c243] whereas object of type
log4j:ERROR "org.apache.log4j.rolling.SizeBasedTriggeringPolicy" was loaded by [sun.misc.Launcher$AppClassLoader@2328c243]. . This is because we see that there is a b ug reported as https://bz.apache.org/bugzilla/show_bug.cgi?id=36384. which says that in some older version of log4j these rolling policies were not configurable via log4j.properties (those were only configurable via log4j.xml) This bug added a feature in log4j to achieve "Configuring triggering/rolling policies should be supported through properties" hence you will need to make sure that you are using the log4j JAR of version "log4j-1.2.17.jar" (instead of using the "log4j-1.2.15.jar") Hence if users wants to use the rotation and zipping feature of log4j then make sure that your AMS collector is not using old version of log4j. This article just describes a workaround hence follow this suggestion at your own risk because here we are going to change the default log4j jar shipped with AMS collector lib. # mv /usr/lib/ambari-metrics-collector/log4j-1.2.15.jar /tmp/
# cp -f /usr/lib/ams-hbase/lib/log4j-1.2.17.jar /usr/lib/ambari-metrics-collector/
Now also make sure to copy "log4j-extras-1.2.17.jar" onto the Ambari Metrics Collector host; it provides the various log rotation policies.
# mkdir /tmp/log4j_extras
# curl http://apache.mirrors.tds.net/logging/log4j/extras/1.2.17/apache-log4j-extras-1.2.17-bin.zip -o /tmp/log4j_extras/apache-log4j-extras-1.2.17-bin.zip
# cd /tmp/log4j_extras
# unzip apache-log4j-extras-1.2.17-bin.zip
# cp -f /tmp/log4j_extras/apache-log4j-extras-1.2.17/apache-log4j-extras-1.2.17.jar /usr/lib/ambari-metrics-collector/
Users also need to edit "ams-log4j" via Ambari to add the customized appender: Ambari UI --> Ambari Metrics --> Configs --> Advanced --> "Advanced ams-log4j" --> ams-log4j template (text area)
OLD default value (please comment out the following):
# Direct log messages to a log file
#log4j.appender.file=org.apache.log4j.RollingFileAppender
#log4j.appender.file.File=${ams.log.dir}/${ams.log.file}
#log4j.appender.file.MaxFileSize={{ams_log_max_backup_size}}MB
#log4j.appender.file.MaxBackupIndex={{ams_log_number_of_backup_files}}
#log4j.appender.file.layout=org.apache.log4j.PatternLayout
#log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
New appender config:
log4j.appender.file=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.file.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.file.rollingPolicy.maxIndex={{ams_log_number_of_backup_files}}
log4j.appender.file.rollingPolicy.ActiveFileName=${ams.log.dir}/${ams.log.file}
log4j.appender.file.rollingPolicy.FileNamePattern=${ams.log.dir}/${ams.log.file}-%i.gz
log4j.appender.file.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.file.triggeringPolicy.MaxFileSize=10240000
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
Notice: Here, for testing, we are hard-coding the value of the property "log4j.appender.file.triggeringPolicy.MaxFileSize" to "10240000" (around 10 MB), because the triggering policy does not accept values in KB/MB format (like 10KB / 10MB), so the value must be given in bytes. Users can define their own value there. After that, once we restart the AMS Collector service, we should see the Ambari Metrics Collector log rotation as follows:
# cd /var/log/ambari-metrics-collector/
# ls -larth ambari-metrics-collector.lo*
-rw-r--r--. 1 ams hadoop 453K Aug 19 10:16 ambari-metrics-collector.log-4.gz
-rw-r--r--. 1 ams hadoop 354K Aug 19 10:17 ambari-metrics-collector.log-3.gz
-rw-r--r--. 1 ams hadoop 458K Aug 19 10:20 ambari-metrics-collector.log-2.gz
-rw-r--r--. 1 ams hadoop 497K Aug 19 10:22 ambari-metrics-collector.log-1.gz
-rw-r--r--. 1 ams hadoop 9.1M Aug 19 10:25 ambari-metrics-collector.log
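As a quick sanity check after swapping the jars, you can confirm what the collector lib directory actually contains (same path as used above):
# ls -l /usr/lib/ambari-metrics-collector/ | grep -i log4j
Only log4j-1.2.17.jar and apache-log4j-extras-1.2.17.jar should show up; if log4j-1.2.15.jar is still present, the old classes will be loaded and the warnings above will reappear.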
08-19-2018
10:05 AM
1 Kudo
@Michael Bronson We see that https://bz.apache.org/bugzilla/show_bug.cgi?id=36384 says that "Configuring triggering/rolling policies should be supported through properties"; hence you will need to make sure that you are using the log4j JAR of version "log4j-1.2.17.jar" (instead of "log4j-1.2.15.jar"). So make sure that your AMS collector is not using an old version of log4j:
# mv /usr/lib/ambari-metrics-collector/log4j-1.2.15.jar /tmp/
# cp -f /usr/lib/ams-hbase/lib/log4j-1.2.17.jar /usr/lib/ambari-metrics-collector/
Now also make sure to copy "log4j-extras-1.2.17.jar":
# cp -f /tmp/log4j_extras/apache-log4j-extras-1.2.17/apache-log4j-extras-1.2.17.jar /usr/lib/ambari-metrics-collector/
Now edit "ams-log4j" via Ambari as follows: Ambari UI --> Ambari Metrics --> Configs --> Advanced --> "Advanced ams-log4j" --> ams-log4j template (text area)
OLD value:
# Direct log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${ams.log.dir}/${ams.log.file}
log4j.appender.file.MaxFileSize={{ams_log_max_backup_size}}MB
log4j.appender.file.MaxBackupIndex={{ams_log_number_of_backup_files}}
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
CHANGED value:
log4j.appender.file=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.file.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.file.rollingPolicy.maxIndex={{ams_log_number_of_backup_files}}
log4j.appender.file.rollingPolicy.ActiveFileName=${ams.log.dir}/${ams.log.file}
log4j.appender.file.rollingPolicy.FileNamePattern=${ams.log.dir}/${ams.log.file}-%i.gz
log4j.appender.file.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.file.triggeringPolicy.MaxFileSize=1048576
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
Notice: Here I am hard-coding the value of the property "log4j.appender.file.triggeringPolicy.MaxFileSize" to "1048576" (around 1 MB) for testing, because the triggering policy does not accept values in KB/MB format, so the value must be given in bytes. You can define your own value there.