Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2823 | 04-27-2020 03:48 AM |
| | 5475 | 04-26-2020 06:18 PM |
| | 4648 | 04-26-2020 06:05 PM |
| | 3699 | 04-13-2020 08:53 PM |
| | 5602 | 03-31-2020 02:10 AM |
01-21-2019
09:27 AM
Thank you so much @Jay Kumar SenSharma for the good answer. I wanted to change the cluster-env.xml file so that auto start works properly. I changed the blueprint the way you described, and that is correct, but my services still take about 40 minutes to start after rebooting the system. Can you tell me, step by step, what to do to activate service auto start?
01-15-2019
04:51 AM
@Magical Jelly Good to know that the issue is resolved. It would be wonderful if you could mark this HCC thread as Answered by clicking on the "Accept" link, so that other HCC users can quickly find the answered threads.
01-06-2019
09:43 AM
Also, there are too many alerts appearing automatically after starting. So did it start normally or abnormally? @Jay Kumar SenSharma
01-01-2019
12:40 AM
@Weiss Ruth This HCC thread looks like a duplicate of another one. As per HCC recommendations, please open only one thread for the same issue so that all the relevant answers can be found in one thread: https://community.hortonworks.com/questions/232181/sign-in-with-root-user.html?childToView=232212#answer-232212

Copying my response from the other thread: Please make sure that you are using the correct SSH port to connect to the Sandbox. The port will be 2222, something like the following:

# ssh root@127.0.0.1 -p 2222
Enter password: hadoop

Or you can use the Web Client to do an SSH login to the sandbox by accessing the following URL: http://localhost:4200
12-14-2018
05:11 AM
Sometimes, for troubleshooting purposes, it is desired to see what kind of HTTP requests the Ambari Server makes to the outside world, and to know the values of the various HTTP headers (like User-Agent, Accept, Last-Modified, Content-Type, Content-Length, Connection, ETag, Server, X-Cache, etc.) which ambari-server uses to make calls to the Ambari public repo. To enable HTTP logging of the requests made by the Ambari Server, we can do the following:

Step-1). Create a file "/etc/ambari-server/conf/http.logging.properties" with the following content:

.level=INFO
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
sun.net.www.protocol.level=ALL
sun.net.www.protocol.http.HttpURLConnection.level=ALL
debug.level=ALL

Step-2). Inside the "/var/lib/ambari-server/ambari-env.sh" file, make sure to add the parameter "-Djava.util.logging.config.file=/etc/ambari-server/conf/http.logging.properties" to the "AMBARI_JVM_ARGS" variable to enable the HTTP logging. Example:

export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Xms512m -Xmx2048m -XX:MaxPermSize=128m -Djava.security.auth.login.config=$ROOT/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -Dcom.sun.jndi.ldap.connect.pool.protocol=\"plain ssl\" -Dcom.sun.jndi.ldap.connect.pool.maxsize=20 -Dcom.sun.jndi.ldap.connect.pool.timeout=300000 -Djava.util.logging.config.file=/etc/ambari-server/conf/http.logging.properties"

Step-3). Restart the Ambari Server:

# ambari-server restart

Step-4). Now check the "/var/log/ambari-server/ambari-server.out" file to see the HTTP logging. Example output:

Dec 14, 2018 4:52:29 AM sun.net.www.protocol.http.HttpURLConnection plainConnect0
FINEST: ProxySelector Request for http://public-repo-1.hortonworks.com/HDP/ubuntu14/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml
Dec 14, 2018 4:52:29 AM sun.net.www.http.HttpClient logFinest
FINEST: KeepAlive stream retrieved from the cache, sun.net.www.http.HttpClient(http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml)
Dec 14, 2018 4:52:29 AM sun.net.www.protocol.http.HttpURLConnection plainConnect0
FINEST: Proxy used: DIRECT
Dec 14, 2018 4:52:29 AM sun.net.www.protocol.http.HttpURLConnection writeRequests
FINE: sun.net.www.MessageHeader@5b3d7d395 pairs: {GET /HDP/ubuntu14/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml HTTP/1.1: null}{User-Agent: Java/1.8.0_112}{Host: public-repo-1.hortonworks.com}{Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2}{Connection: keep-alive}
Dec 14, 2018 4:52:29 AM sun.net.www.http.HttpClient logFinest
FINEST: KeepAlive stream used: http://public-repo-1.hortonworks.com/HDP/ubuntu14/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml
Dec 14, 2018 4:52:29 AM sun.net.www.protocol.http.HttpURLConnection getInputStream0
FINE: sun.net.www.MessageHeader@7854dac13 pairs: {null: HTTP/1.1 200 OK}{Content-Type: application/xml}{Content-Length: 2597}{Connection: keep-alive}{Last-Modified: Thu, 12 Jul 2018 23:45:36 GMT}{Accept-Ranges: bytes}{Server: AmazonS3}{Date: Thu, 13 Dec 2018 12:28:28 GMT}{ETag: "abcdefgh42ffdc96d0ab1de61e0dc36cd3"}{Age: 59042}{X-Cache: Hit from cloudfront}{Via: 1.1 frontend_cloud.example.net (CloudFront)}{X-Amz-Cf-Id: Sn-ABCDEFGH8xvcOmkqoeHg_UCYmwyRU9tgDonWfAd4TMXoPdMabcdefgh==}
Dec 14, 2018 4:52:29 AM sun.net.www.protocol.http.HttpURLConnection plainConnect0
FINEST: ProxySelector Request for http://public-repo-1.hortonworks.com/HDP/centos7-ppc/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml
Dec 14, 2018 4:52:29 AM sun.net.www.http.HttpClient logFinest
FINEST: KeepAlive stream retrieved from the cache, sun.net.www.http.HttpClient(http://public-repo-1.hortonworks.com/HDP/ubuntu14/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml)
Dec 14, 2018 4:52:29 AM sun.net.www.protocol.http.HttpURLConnection plainConnect0
FINEST: Proxy used: DIRECT
Dec 14, 2018 4:52:29 AM sun.net.www.protocol.http.HttpURLConnection writeRequests
FINE: sun.net.www.MessageHeader@32b9674a5 pairs: {GET /HDP/centos7-ppc/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml HTTP/1.1: null}{User-Agent: Java/1.8.0_112}{Host: public-repo-1.hortonworks.com}{Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2}{Connection: keep-alive}
Dec 14, 2018 4:52:29 AM sun.net.www.http.HttpClient logFinest
FINEST: KeepAlive stream used: http://public-repo-1.hortonworks.com/HDP/centos7-ppc/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml
Dec 14, 2018 4:52:29 AM sun.net.www.protocol.http.HttpURLConnection getInputStream0
FINE: sun.net.www.MessageHeader@606955ef13 pairs: {null: HTTP/1.1 200 OK}{Content-Type: application/xml}{Content-Length: 2609}{Connection: keep-alive}{Date: Mon, 10 Dec 2018 09:44:35 GMT}{Last-Modified: Fri, 13 Jul 2018 04:43:55 GMT}{ETag: "abcdefgh42ffdc96d0ab1de61e0dc36cd3"}{Accept-Ranges: bytes}{Server: AmazonS3}{Age: 59042}{X-Cache: Hit from cloudfront}{Via: 1.1 frontend_cloud.example.net (CloudFront)}{X-Amz-Cf-Id: ABCDEFGHb3LeLucDbVt5BgHNJmhZRjQPUYPMZS7zHO7oqR1Kabcdefgh==}
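When scanning "ambari-server.out" for a specific header, it can help to turn the `pairs: {Header: value}...` lines shown above into a structured form. Below is a minimal, hypothetical Python sketch (my own helper, not part of Ambari) that parses such a line into a dict:

```python
import re

# Hypothetical helper: parse the "{Header: value}" pairs printed by
# sun.net.www.MessageHeader into a dict for easier inspection.
PAIR = re.compile(r"\{([^:{}]*):\s?([^{}]*)\}")

def parse_pairs(line):
    """Return a header -> value mapping from a 'pairs:' log line."""
    _, _, tail = line.partition("pairs: ")
    return {k.strip(): v.strip() for k, v in PAIR.findall(tail)}

line = ('FINE: sun.net.www.MessageHeader@7854dac13 pairs: '
        '{null: HTTP/1.1 200 OK}{Content-Type: application/xml}'
        '{Content-Length: 2597}{Server: AmazonS3}')
hdrs = parse_pairs(line)
print(hdrs["Content-Type"])  # → application/xml
```

Note that the HTTP status line appears under the key "null", matching how MessageHeader prints it in the log output above.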
11-15-2018
03:26 AM
When we access an HDFS directory using the Ambari File View, we see at most 5000 sub-directories/contents per page, with a message "Showing 5000 files or folders of xxxx". The default "views.files.max.files.per.page" limit is set to 5000 here: /ambari/view/commons/hdfs/FileOperationService.java#L53-L56 This property was added in Ambari 2.6 as part of JIRA https://issues.apache.org/jira/browse/AMBARI-21890 to avoid the browser hanging while opening an HDFS folder which has a huge number of files. If a user wants to see more items per page, it can be achieved as follows:

1. Set/adjust the following property inside "ambari.properties" to a somewhat larger page size. For example, here it is set to 7000:

# grep 'views.files.max.files.per.page' /etc/ambari-server/conf/ambari.properties
views.files.max.files.per.page=7000

2. Then restart the Ambari Server:

# ambari-server restart

3. Once the Ambari Server has restarted, check the File View again to verify that it shows the configured number of items per page.

4. After accessing the File View directories, also check ambari-server.log to confirm that the pagination value is reflected properly:

# grep 'maxFilesPerPageProperty' /var/log/ambari-server/ambari-server.log
INFO [ambari-client-thread-38] FileOperationService:69 - maxFilesPerPageProperty = 7000
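The property edit in step 1 can also be scripted. The following is a small illustrative Python sketch (the helper name `set_property` is mine, not an Ambari tool, and the sample property names besides `views.files.max.files.per.page` are made up) that updates a `key=value` entry in Java-style properties text, appending it if absent:

```python
# Illustrative sketch: update a key=value entry in Java-style
# .properties text, appending it if absent.
def set_property(text, key, value):
    lines, found = [], False
    for line in text.splitlines():
        if line.split("=", 1)[0].strip() == key:
            lines.append(f"{key}={value}")   # replace the existing entry
            found = True
        else:
            lines.append(line)
    if not found:
        lines.append(f"{key}={value}")       # append a new entry
    return "\n".join(lines)

conf = "some.other.property=1\nviews.files.max.files.per.page=5000"
print(set_property(conf, "views.files.max.files.per.page", 7000))
```

In practice you would read "/etc/ambari-server/conf/ambari.properties", rewrite it with the new value, and then restart the Ambari Server as in step 2.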
08-19-2018
10:40 AM
1 Kudo
Sometimes it is desired to have logs rotated as well as compressed. We can use log4j extras to achieve this. For processes like NameNode / DataNode, etc., we can use the approach described in this article: https://community.hortonworks.com/articles/50058/using-log4j-extras-how-to-rotate-as-well-as-zip-th.html However, when we try the same approach in Ambari 2.6 for Ambari Metrics Collector log compression and rotation, it will not work, and we might see some warnings / errors like the following:

log4j:WARN Failed to set property [triggeringPolicy] to value "org.apache.log4j.rolling.SizeBasedTriggeringPolicy".
log4j:WARN Failed to set property [rollingPolicy] to value "org.apache.log4j.rolling.FixedWindowRollingPolicy".
log4j:WARN Please set a rolling policy for the RollingFileAppender named 'file'
log4j:ERROR No output stream or file set for the appender named [file].
(OR)
log4j:ERROR A "org.apache.log4j.rolling.SizeBasedTriggeringPolicy" object is not
assignable to a "org.apache.log4j.rolling.RollingPolicy" variable.
log4j:ERROR The class "org.apache.log4j.rolling.RollingPolicy" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@2328c243] whereas object of type
log4j:ERROR "org.apache.log4j.rolling.SizeBasedTriggeringPolicy" was loaded by [sun.misc.Launcher$AppClassLoader@2328c243].

This is because of a bug reported as https://bz.apache.org/bugzilla/show_bug.cgi?id=36384, which says that in some older versions of log4j these rolling policies were not configurable via log4j.properties (they were only configurable via log4j.xml). The fix for that bug added a feature to log4j so that "Configuring triggering/rolling policies should be supported through properties". Hence you will need to make sure that you are using log4j JAR version "log4j-1.2.17.jar" (instead of "log4j-1.2.15.jar"). So if users want to use the rotation and zipping features of log4j, make sure your AMS collector is not using an old version of log4j. This article only describes a workaround, so follow this suggestion at your own risk, because here we are going to replace the default log4j jar shipped with the AMS collector lib:

# mv /usr/lib/ambari-metrics-collector/log4j-1.2.15.jar /tmp/
# cp -f /usr/lib/ams-hbase/lib/log4j-1.2.17.jar /usr/lib/ambari-metrics-collector/

Now also make sure to copy "log4j-extras-1.2.17.jar" to the Ambari Metrics Collector host, which provides the various log rotation policies:

# mkdir /tmp/log4j_extras
# curl http://apache.mirrors.tds.net/logging/log4j/extras/1.2.17/apache-log4j-extras-1.2.17-bin.zip -o /tmp/log4j_extras/apache-log4j-extras-1.2.17-bin.zip
# cd /tmp/log4j_extras
# unzip apache-log4j-extras-1.2.17-bin.zip
# cp -f /tmp/log4j_extras/apache-log4j-extras-1.2.17/apache-log4j-extras-1.2.17.jar /usr/lib/ambari-metrics-collector/

Users also need to edit "ams-log4j" via Ambari to add the customized appender: Ambari UI --> Ambari Metrics --> Configs --> Advanced --> "Advanced ams-log4j" --> ams-log4j template (text area). Old default value (please comment out the following):

# Direct log messages to a log file
#log4j.appender.file=org.apache.log4j.RollingFileAppender
#log4j.appender.file.File=${ams.log.dir}/${ams.log.file}
#log4j.appender.file.MaxFileSize={{ams_log_max_backup_size}}MB
#log4j.appender.file.MaxBackupIndex={{ams_log_number_of_backup_files}}
#log4j.appender.file.layout=org.apache.log4j.PatternLayout
#log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

New appender config:

log4j.appender.file=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.file.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.file.rollingPolicy.maxIndex={{ams_log_number_of_backup_files}}
log4j.appender.file.rollingPolicy.ActiveFileName=${ams.log.dir}/${ams.log.file}
log4j.appender.file.rollingPolicy.FileNamePattern=${ams.log.dir}/${ams.log.file}-%i.gz
log4j.appender.file.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.file.triggeringPolicy.MaxFileSize=10240000
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

Notice: Here, for testing, we are hard-coding the value of the property "log4j.appender.file.triggeringPolicy.MaxFileSize" to "10240000" (around 10 MB), because the triggering policy does not accept values in KB/MB format (like 10KB / 10MB); hence we are specifying the value in bytes. Users can define their own value there. After we restart the AMS collector service, we should see the Ambari Metrics Collector log rotating as follows:

# cd /var/log/ambari-metrics-collector/
# ls -larth ambari-metrics-collector.lo*
-rw-r--r--. 1 ams hadoop 453K Aug 19 10:16 ambari-metrics-collector.log-4.gz
-rw-r--r--. 1 ams hadoop 354K Aug 19 10:17 ambari-metrics-collector.log-3.gz
-rw-r--r--. 1 ams hadoop 458K Aug 19 10:20 ambari-metrics-collector.log-2.gz
-rw-r--r--. 1 ams hadoop 497K Aug 19 10:22 ambari-metrics-collector.log-1.gz
-rw-r--r--. 1 ams hadoop 9.1M Aug 19 10:25 ambari-metrics-collector.log
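To make the fixed-window / size-based behaviour configured above concrete, here is an illustrative Python sketch (my own toy code, not how log4j-extras is implemented): when the active log reaches a byte threshold, gzip it to `<log>-1.gz` and shift the older archives up, keeping at most `max_index` of them:

```python
import gzip
import os

def maybe_roll(path, max_bytes=10240000, max_index=4):
    """Toy fixed-window roll: if the active log at `path` is at least
    `max_bytes`, shift path-i.gz archives up and gzip the log to path-1.gz."""
    if not os.path.exists(path) or os.path.getsize(path) < max_bytes:
        return False
    # Shift path-3.gz -> path-4.gz, path-2.gz -> path-3.gz, path-1.gz -> path-2.gz
    for i in range(max_index - 1, 0, -1):
        old = f"{path}-{i}.gz"
        if os.path.exists(old):
            os.replace(old, f"{path}-{i + 1}.gz")
    with open(path, "rb") as src, gzip.open(f"{path}-1.gz", "wb") as dst:
        dst.write(src.read())
    open(path, "w").close()  # truncate the active log
    return True
```

This mirrors the directory listing above: one active `ambari-metrics-collector.log` plus numbered `.gz` archives, with the newest archive at index 1.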
10-31-2018
12:02 PM
I have followed the steps, but after making the change and restarting Zeppelin I am getting the below error. Please help me resolve it: HTTP ERROR: 503 Problem accessing /. Reason: Service Unavailable
02-28-2019
05:23 PM
Thank you for the very helpful article.
08-29-2017
09:29 AM
1 Kudo
Many times we see some repeated logging inside our log files. For example, in the case of ambari-server.log, we see the following kind of repeated logging:

WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackArtifacts(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.

We might see the above kind of warning message repeated many times:

# grep 'public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RequestService.getRequests' /var/log/ambari-server/ambari-server.log
150
These are actually harmless WARNING messages, but many times it is desired to keep them from being logged; that way we can save some disk space and have a clean log. It is not always possible to change the rootLogger to "ERROR" (as below) to avoid printing some INFO/WARNING messages, because it would also suppress other useful INFO/WARNING messages:

log4j.rootLogger=ERROR,file

In order to avoid logging a few specific log entries based on strings, irrespective of the logging level (INFO/WARNING/ERROR/DEBUG) those entries come from: in this case, if we do not want to log any line which contains "public javax.ws.rs.core.Response", we can make use of the StringMatchFilter feature of log4j as follows.

Step-1). Edit "/etc/ambari-server/conf/log4j.properties" and add the following 3 lines just below the "file" log appender:

log4j.appender.file.filter.01=org.apache.log4j.varia.StringMatchFilter
log4j.appender.file.filter.01.StringToMatch=public javax.ws.rs.core.Response
log4j.appender.file.filter.01.AcceptOnMatch=false

Now the "file" log appender in log4j.properties will look like the following:

# Direct log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${ambari.log.dir}/${ambari.log.file}
log4j.appender.file.MaxFileSize=80MB
log4j.appender.file.MaxBackupIndex=60
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{DATE} %5p [%t] %c{1}:%L - %m%n
log4j.appender.file.filter.01=org.apache.log4j.varia.StringMatchFilter
log4j.appender.file.filter.01.StringToMatch=public javax.ws.rs.core.Response
log4j.appender.file.filter.01.AcceptOnMatch=false

NOTE: We can use as many filters as we want; we only need to change the filter number (like "log4j.appender.file.filter.01", "log4j.appender.file.filter.02", "log4j.appender.file.filter.03") with different "StringToMatch" values.

Step-2). Move the old ambari-server logs and restart the ambari-server:

# mv /var/log/ambari-server /var/log/ambari-server_OLD
# ambari-server restart

Step-3). Tail the ambari-server.log after restarting the Ambari Server to confirm that the matching line entries are gone; the following grep should now return nothing:

# grep 'public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RequestService.getRequests' /var/log/ambari-server/ambari-server.log
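The effect of StringMatchFilter with AcceptOnMatch=false can be illustrated with a tiny Python sketch (my own example, not log4j code; the sample log lines besides the WARNING are made up): a line is kept only if it contains none of the deny strings:

```python
# Toy analogue of log4j's StringMatchFilter with AcceptOnMatch=false:
# a line is dropped if it contains any deny string.
DENY_STRINGS = ["public javax.ws.rs.core.Response"]

def keep(line):
    """Return True if the line should be written to the log."""
    return not any(s in line for s in DENY_STRINGS)

lines = [
    "WARNING: A HTTP GET method, public javax.ws.rs.core.Response ...",
    "INFO: Ambari Server started",
]
print([l for l in lines if keep(l)])  # → ['INFO: Ambari Server started']
```

Adding more entries to the deny list corresponds to adding more numbered filters ("filter.02", "filter.03", ...) with their own "StringToMatch" values.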