Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2506 | 04-27-2020 03:48 AM |
| | 4972 | 04-26-2020 06:18 PM |
| | 4054 | 04-26-2020 06:05 PM |
| | 3287 | 04-13-2020 08:53 PM |
| | 5010 | 03-31-2020 02:10 AM |
08-22-2018 05:29 PM

Hi @Jay Kumar SenSharma, @Matt Burgess, @Geoffrey Shelton Okot, can anyone please help me out with this? Any help will be appreciated a lot. Thanks, Deepak
08-22-2018 02:39 PM

@Jay Kumar SenSharma, if I want to configure a data lake for business intelligence, which should I use, HDP or HDF?
08-19-2018 10:40 AM
1 Kudo
Sometimes it is desirable to have logs rotated as well as compressed. We can use log4j extras to achieve this. For processes like the NameNode, DataNode, etc., we can use the approach described in this article: https://community.hortonworks.com/articles/50058/using-log4j-extras-how-to-rotate-as-well-as-zip-th.html

However, when we try the same approach in Ambari 2.6 for Ambari Metrics Collector log compression and rotation, it does not work, and we may see warnings/errors in the collector logs, something like the following:

log4j:WARN Failed to set property [triggeringPolicy] to value "org.apache.log4j.rolling.SizeBasedTriggeringPolicy".
log4j:WARN Failed to set property [rollingPolicy] to value "org.apache.log4j.rolling.FixedWindowRollingPolicy".
log4j:WARN Please set a rolling policy for the RollingFileAppender named 'file'
log4j:ERROR No output stream or file set for the appender named [file].

(OR)

log4j:ERROR A "org.apache.log4j.rolling.SizeBasedTriggeringPolicy" object is not
assignable to a "org.apache.log4j.rolling.RollingPolicy" variable.
log4j:ERROR The class "org.apache.log4j.rolling.RollingPolicy" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@2328c243] whereas object of type
log4j:ERROR "org.apache.log4j.rolling.SizeBasedTriggeringPolicy" was loaded by [sun.misc.Launcher$AppClassLoader@2328c243].

This is because of a bug reported as https://bz.apache.org/bugzilla/show_bug.cgi?id=36384, which says that in some older versions of log4j these rolling policies were not configurable via log4j.properties (they were only configurable via log4j.xml). The fix for that bug added the feature "Configuring triggering/rolling policies should be supported through properties", so you will need to make sure that you are using log4j JAR version "log4j-1.2.17.jar" (instead of "log4j-1.2.15.jar").

Hence, if you want to use the rotation and zipping features of log4j, make sure your AMS collector is not using the old log4j version. This article only describes a workaround, so follow this suggestion at your own risk, because we are going to replace the default log4j JAR shipped with the AMS collector lib:

# mv /usr/lib/ambari-metrics-collector/log4j-1.2.15.jar /tmp/
# cp -f /usr/lib/ams-hbase/lib/log4j-1.2.17.jar /usr/lib/ambari-metrics-collector/

Also make sure to copy "log4j-extras-1.2.17.jar", which provides the various log rolling policies, onto the Ambari Metrics Collector host:

# mkdir /tmp/log4j_extras
# curl http://apache.mirrors.tds.net/logging/log4j/extras/1.2.17/apache-log4j-extras-1.2.17-bin.zip -o /tmp/log4j_extras/apache-log4j-extras-1.2.17-bin.zip
# cd /tmp/log4j_extras
# unzip apache-log4j-extras-1.2.17-bin.zip
# cp -f /tmp/log4j_extras/apache-log4j-extras-1.2.17/apache-log4j-extras-1.2.17.jar /usr/lib/ambari-metrics-collector/

You also need to edit "ams-log4j" via Ambari to add the customized appender:

Ambari UI --> Ambari Metrics --> Configs --> Advanced --> "Advanced ams-log4j" --> ams-log4j template (text area)

OLD default value (please comment out the following):

# Direct log messages to a log file
#log4j.appender.file=org.apache.log4j.RollingFileAppender
#log4j.appender.file.File=${ams.log.dir}/${ams.log.file}
#log4j.appender.file.MaxFileSize={{ams_log_max_backup_size}}MB
#log4j.appender.file.MaxBackupIndex={{ams_log_number_of_backup_files}}
#log4j.appender.file.layout=org.apache.log4j.PatternLayout
#log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

New appender config:

log4j.appender.file=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.file.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.file.rollingPolicy.maxIndex={{ams_log_number_of_backup_files}}
log4j.appender.file.rollingPolicy.ActiveFileName=${ams.log.dir}/${ams.log.file}
log4j.appender.file.rollingPolicy.FileNamePattern=${ams.log.dir}/${ams.log.file}-%i.gz
log4j.appender.file.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.file.triggeringPolicy.MaxFileSize=10240000
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

Notice: Here, for testing, we hard-code the value of the property "log4j.appender.file.triggeringPolicy.MaxFileSize" to "10240000" (around 10 MB), because this triggering policy does not accept values in KB/MB format (like 10KB / 10MB); the value must be given in bytes. Users can define their own value there.

After restarting the AMS collector service, we should see the Ambari Metrics Collector log rotating as follows:

# cd /var/log/ambari-metrics-collector/
# ls -larth ambari-metrics-collector.lo*
-rw-r--r--. 1 ams hadoop 453K Aug 19 10:16 ambari-metrics-collector.log-4.gz
-rw-r--r--. 1 ams hadoop 354K Aug 19 10:17 ambari-metrics-collector.log-3.gz
-rw-r--r--. 1 ams hadoop 458K Aug 19 10:20 ambari-metrics-collector.log-2.gz
-rw-r--r--. 1 ams hadoop 497K Aug 19 10:22 ambari-metrics-collector.log-1.gz
-rw-r--r--. 1 ams hadoop 9.1M Aug 19 10:25 ambari-metrics-collector.log
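The byte value used for MaxFileSize can be computed rather than hard-coded. A minimal sketch (MAX_MB is a hypothetical helper variable, not an AMS property):

```shell
# SizeBasedTriggeringPolicy wants a plain byte count, not "10MB"/"10KB",
# so compute the bytes for the desired size (matching the 10240000 used above):
MAX_MB=10
MAX_BYTES=$((MAX_MB * 1024 * 1000))
echo "log4j.appender.file.triggeringPolicy.MaxFileSize=${MAX_BYTES}"
# → log4j.appender.file.triggeringPolicy.MaxFileSize=10240000
```

Paste the resulting line into the ams-log4j template in place of the hard-coded value.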
08-17-2018 12:17 AM
1 Kudo
@Sahil M
Great to know that the issue reported in this thread is resolved. It would be great to keep separate issues in separate threads, so that other HCC users can quickly find and browse the correct answers. Can you please mark this HCC thread as answered by clicking the "Accept" button on the correct answer? We can continue with the other issue you are facing in another thread.
08-13-2018 02:23 AM

@Jay Kumar SenSharma
I have used the toolkit to generate keystore.jks and truststore.jks, and enabled SSL for NiFi. But I encountered another problem.

./tls-toolkit.sh standalone -n 'hadfnode1' -C 'CN=admin/admin, OU=NIFI.COM' -f /usr/hdf/current/nifi/conf/nifi.properties -o './target2' -K hadoop -P hadoop -S hadoop

I access https://xxx:9091/nifi and have loaded the p12 file into the browser, but I get an 'Insufficient Permissions' exception. Thanks.
08-11-2018 05:10 AM

I'm glad that it is all sorted now. Another way was to delete the particular node from the cluster, re-add it, and then add the Spark client on it. I recently did that on one of my test clusters and it worked.
08-10-2018 05:47 AM

@Jay, thank you so much. We did the testing and everything is fine; now we need to do the steps via the API.
08-08-2018 01:57 AM
1 Kudo

@Harry Li Please try this to verify your JDBC driver version. On the Ambari Server host:

# mkdir /tmp/JDBC
# cd /tmp/JDBC
# cp -f /var/lib/ambari-server/resources/mysql-connector-java.jar /tmp/JDBC/
# jar xvf mysql-connector-java.jar
Then grep the version:

# grep 'Implementation-Version' META-INF/MANIFEST.MF
Implementation-Version: 8.0.11
# cat META-INF/services/java.sql.Driver
com.mysql.cj.jdbc.Driver

If your MySQL JDBC driver version were correct (a MySQL 5 JDBC driver), you would instead see something like the following:

# grep 'Implementation-Version' META-INF/MANIFEST.MF
Implementation-Version: 5.1.25-SNAPSHOT
# cat META-INF/services/java.sql.Driver
com.mysql.jdbc.Driver

MySQL Connector 5.1 download link: https://dev.mysql.com/downloads/connector/j/5.1.html
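As a quick way to interpret what META-INF/services/java.sql.Driver tells you, here is a small sketch (the `driver_class` variable and the messages are illustrative helpers, not part of any Ambari tooling):

```shell
# Map the registered JDBC driver class to a Connector/J family:
# "com.mysql.cj.jdbc.Driver" is Connector/J 8.x, "com.mysql.jdbc.Driver" is 5.x.
driver_class="com.mysql.cj.jdbc.Driver"   # e.g. pasted from META-INF/services/java.sql.Driver
case "$driver_class" in
  com.mysql.cj.jdbc.Driver) family="Connector/J 8.x" ;;
  com.mysql.jdbc.Driver)    family="Connector/J 5.x" ;;
  *)                        family="unknown" ;;
esac
echo "$driver_class -> $family"
# → com.mysql.cj.jdbc.Driver -> Connector/J 8.x
```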
08-07-2018 11:33 PM
1 Kudo

@Maxim Neaga
What error are you getting while connecting to Hive using Beeline? Are you sure the URL you entered is correct? Because it looks like you are trying to connect to a MySQL database instead of HiveServer2. The following is what I tried:

# useradd test
# usermod -G hadoop test
# su - hdfs -c "hdfs dfs -mkdir /user/test"
# su - hdfs -c "hdfs dfs -chown -R test:hadoop /user/test"
Then connect to Beeline using the test user:

# su - test
$ beeline
Beeline version 1.2.1000.2.6.5.0-292 by Apache Hive
beeline> !connect jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Execute queries:

beeline> !connect jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connecting to jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
Enter password for jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
Connected to: Apache Hive (version 1.2.1000.2.6.5.0-292)
Driver: Hive JDBC (version 1.2.1000.2.6.5.0-292)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://newhwx1.example.com:2181,newh> show databases;
+----------------+--+
| database_name |
+----------------+--+
| default |
+----------------+--+
1 row selected (0.627 seconds)

Here the URL needs to be correct. For example, in the above case I am connecting to HiveServer2 using the ZooKeeper quorum, which I found on the Ambari Hive summary page:

jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2

You can get the Hive URL from the Hive service summary page in the Ambari UI.
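The discovery URL is just the ZooKeeper quorum joined with commas plus a fixed suffix. A small sketch that assembles it (the host names are the example hosts from this post; substitute your own quorum):

```shell
# Build the HiveServer2 ZooKeeper-discovery JDBC URL from the quorum hosts.
hosts=(newhwx1.example.com newhwx2.example.com newhwx3.example.com)
port=2181
# Append ":2181" to each host, then join the list with commas:
quorum=$(IFS=,; echo "${hosts[*]/%/:$port}")
echo "jdbc:hive2://${quorum}/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
# → jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
```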
08-01-2018 01:13 AM

@Harry Li It looks like a symlink issue: for some reason the symlink was not created properly. The "lib" directory should actually be here:

# ls -l /usr/hdp/2.6.5.0-292/slider/lib/

The "/usr/hdp/current/slider-client" path should ideally be a symlink, something like the following:

# ls -l /usr/hdp/current/slider-client
lrwxrwxrwx. 1 root root 27 Jun 28 22:44 /usr/hdp/current/slider-client -> /usr/hdp/2.6.5.0-292/slider
NOTE: here the version "2.6.5.0-292" might be different based on your stack version, so please change it accordingly.

So you might want to create the symlink on your own. First, check whether "slider-client" is a directory or a symlink:

# ls -ld /usr/hdp/current/slider-client
lrwxrwxrwx. 1 root root 27 Jun 28 22:44 /usr/hdp/current/slider-client -> /usr/hdp/2.6.5.0-292/slider

If you find that "/usr/hdp/current/slider-client" is a directory, then move it aside:

# mv /usr/hdp/current/slider-client /usr/hdp/current/slider-client_OLD

Then create the symlink instead:

# ln -s /usr/hdp/2.6.5.0-292/slider /usr/hdp/current/slider-client
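The check-move-link steps can be wrapped in one small function. A sketch under the assumption that your stack version is 2.6.5.0-292 (`ensure_symlink` is a hypothetical helper, not an HDP tool):

```shell
# Ensure a path is a symlink to the versioned directory.
# If it is a real directory (the broken state), move it aside first.
ensure_symlink() {
  link=$1; target=$2
  if [ -d "$link" ] && [ ! -L "$link" ]; then
    mv "$link" "${link}_OLD"
  fi
  ln -sfn "$target" "$link"
}

# On a real cluster (adjust the version to your stack):
# ensure_symlink /usr/hdp/current/slider-client /usr/hdp/2.6.5.0-292/slider
```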