Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2720 | 04-27-2020 03:48 AM |
| | 5280 | 04-26-2020 06:18 PM |
| | 4445 | 04-26-2020 06:05 PM |
| | 3570 | 04-13-2020 08:53 PM |
| | 5377 | 03-31-2020 02:10 AM |
08-19-2018
01:20 AM
1 Kudo
@Michael Bronson Please try the steps below and check whether they work. (I tested this on Ambari 2.6.2.2 and it worked fine without any issue.)

Step 1) Download and set up the log4j-extras JAR:

# mkdir /tmp/log4j_extras
# curl http://apache.mirrors.tds.net/logging/log4j/extras/1.2.17/apache-log4j-extras-1.2.17-bin.zip -o /tmp/log4j_extras/apache-log4j-extras-1.2.17-bin.zip
# cd /tmp/log4j_extras
# unzip apache-log4j-extras-1.2.17-bin.zip
# cp -f /tmp/log4j_extras/apache-log4j-extras-1.2.17/apache-log4j-extras-1.2.17.jar /usr/lib/ams-hbase/lib

Alternatively, in the Ambari UI you can set the "hbase_classpath_additional" property inside the AMS configs to point to the log4j-extras JAR.

Step 2) Edit the "ams-hbase-log4j template". In the Ambari UI navigate to:

Ambari UI --> Ambari Metrics --> Configs --> Advanced --> "Advanced ams-hbase-log4j" --> ams-hbase-log4j template (text area)

Then comment out the following part:

# Rolling File Appender
#log4j.appender.RFA=org.apache.log4j.RollingFileAppender
#log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
#log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}
#log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}
#log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n

Now add the following part to define your own RFA appender:

log4j.appender.RFA=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
log4j.appender.RFA.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.RFA.rollingPolicy.ActiveFileName=${hbase.log.dir}/${hbase.log.file}
log4j.appender.RFA.rollingPolicy.FileNamePattern=${hbase.log.dir}/${hbase.log.file}-.%d{yyyyMMdd}.log.gz

Then restart the AMS service and wait some time for the rolling to happen. The next day I could see a gzipped file "hbase-ams-master-newhwx2.example.com.log-.20180817.log.gz" created at my end:

# ls -lart /var/log/ambari-metrics-collector/
-rw-r--r--. 1 ams hadoop 9177961 Aug 18 00:00 hbase-ams-master-newhwx2.example.com.log-.20180817.log.gz
-rw-r--r--. 1 ams hadoop 7520915 Aug 18 19:43 gc.log-201808170101
-rw-r--r--. 1 ams hadoop 2506556 Aug 18 19:47 collector-gc.log-201808170220
-rw-r--r--. 1 ams hadoop 1869725 Aug 19 01:14 hbase-ams-master-newhwx2.example.com.log
drwxr-xr-x. 2 ams hadoop    8192 Aug 19 01:14 .
-rw-r--r--. 1 ams hadoop     234 Aug 19 01:14 hbase-ams-master-newhwx2.example.com.out
-rw-r--r--. 1 ams hadoop     678 Aug 19 01:16 ambari-metrics-collector-startup.out
-rw-r--r--. 1 ams hadoop 6485518 Aug 19 01:16 SecurityAuth.audit
-rw-r--r--. 1 ams hadoop    2008 Aug 19 01:17 ambari-metrics-collector.out
-rw-r--r--. 1 ams hadoop 40038004 Aug 19 01:18 ambari-metrics-collector.log
-rw-r--r--. 1 ams hadoop    6077 Aug 19 01:18 collector-gc.log-201808190114
-rw-r--r--. 1 ams hadoop   89210 Aug 19 01:18 gc.log-201808190114
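As a quick sanity check, here is a minimal POSIX shell sketch (using the log directory and file name from the listing above) that expands the TimeBasedRollingPolicy's FileNamePattern by hand, to show what file name to expect after a daily rollover:

```shell
# Sketch only: expand the pattern
#   ${hbase.log.dir}/${hbase.log.file}-.%d{yyyyMMdd}.log.gz
# manually for one day, to show the expected rolled-file name.
log_dir=/var/log/ambari-metrics-collector          # ${hbase.log.dir}
log_file=hbase-ams-master-newhwx2.example.com.log  # ${hbase.log.file}
stamp=20180817                                     # %d{yyyyMMdd} for the rolled day
rolled="${log_dir}/${log_file}-.${stamp}.log.gz"
echo "$rolled"
```

The echoed name matches the gzipped file in the directory listing above.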
08-18-2018
01:27 PM
@venu gopal As we can see, the error is:

WARNING 2018-08-18 12:39:00,080 NetUtil.py:101 - Failed to connect to https://localhost:8440/ca due to [Errno 111] Connection refused

This indicates that you may not have configured the Ambari Server hostname correctly inside the "/etc/ambari-agent/conf/ambari-agent.ini" file (which is why the agent is trying to connect to localhost). It should point to your Ambari Server hostname:

# grep 'hostname' /etc/ambari-agent/conf/ambari-agent.ini
hostname=ambariserver.example.com

Please make sure the ambari-agent.ini file uses the correct Ambari Server hostname. Also, from the Ambari Agent host, verify that you can reach the Ambari Server host and port, to rule out a port access / firewall issue:

# nc -v $AMBARI_HOST 8440

On the Ambari Server host, also verify its correct hostname, that port 8440 is open, and that iptables/the firewall is disabled:

# hostname -f
# service iptables stop
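To double-check the agent side, a small sketch like the following can read the hostname value from an agent ini file. (The ini content below is a sample written to a temp file for illustration; on a real agent, point "ini" at /etc/ambari-agent/conf/ambari-agent.ini instead.)

```shell
# Sample ini written to a temp file for illustration only.
ini=$(mktemp)
cat > "$ini" <<'EOF'
[server]
hostname=ambariserver.example.com
url_port=8440
EOF
# Extract the configured server hostname.
host=$(sed -n 's/^hostname=//p' "$ini")
echo "$host"
# A value of "localhost" here would explain the connection-refused error.
[ "$host" = "localhost" ] && echo "WARNING: agent still points at localhost"
rm -f "$ini"
```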
08-17-2018
12:17 AM
1 Kudo
@Sahil M Great to know that the issue reported in this thread is resolved. It would be good to keep different issues in separate threads, so that other HCC users can quickly find and browse the correct answers. Could you please mark this HCC thread as answered by clicking the "Accept" button on the correct answer? We can then continue with the other issue you are facing in another thread.
08-16-2018
11:34 PM
2 Kudos
@Sahil M Then please make sure that the same password is also used in the Hive configs in the Ambari UI:

Ambari UI --> Hive --> Configs --> Advanced --> "Hive Metastore" section --> "Database Password"
08-16-2018
11:16 PM
1 Kudo
@Sahil M Have you made any config changes to the Hive service recently on your Sandbox, such as Hive - MySQL setting changes? I see the following message:

Metastore connection URL: jdbc:mysql://sandbox-hdp.hortonworks.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: root
Underlying cause: java.sql.SQLException : Access denied for user 'root'@'sandbox-hdp.hortonworks.com' (using password: YES)

This can happen if the root user does not have permission to run queries from host 'sandbox-hdp.hortonworks.com', or if the MySQL "root" user password is incorrect or has been changed. So please log in to your MySQL DB and run the following queries inside the Sandbox. (You can also use the http://localhost:4200 URL to execute commands.)

# ssh root@127.0.0.1 -p 2222
Enter Password: hadoop

Once you are inside the Sandbox terminal, try this:

# mysql -u root -p
Enter password: hadoop

If the password does not work, please use the following link to reset your MySQL root password to your desired one:
https://community.hortonworks.com/questions/202738/hdp-265-mysql-password-for-the-root-user.html

mysql> use mysql
mysql> SELECT host,User FROM user WHERE User='root';

In the output you should see something like this:

+-------------------------------+------+
| host                          | User |
+-------------------------------+------+
| %                             | root |
| 127.0.0.1                     | root |
| ::1                           | root |
| localhost                     | root |
| sandbox.hortonworks.com       | root |
| sandbox-hdp.hortonworks.com   | root |
+-------------------------------+------+

If you do not see the "sandbox-hdp.hortonworks.com" entry as above, please try to add it:

mysql> CREATE USER 'root'@'sandbox-hdp.hortonworks.com' IDENTIFIED BY '<ROOT_PASSWORD>';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'sandbox-hdp.hortonworks.com' IDENTIFIED BY '<ROOT_PASSWORD>';
mysql> FLUSH PRIVILEGES;

Then try again. Please replace <ROOT_PASSWORD> with the correct root password.
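As a side note, a tiny POSIX shell sketch can extract the DB host from the metastore JDBC URL shown in the error output, i.e. the host the metastore is configured to connect to:

```shell
# Sketch: pull the DB host out of the metastore JDBC URL
# using plain POSIX parameter expansion.
url='jdbc:mysql://sandbox-hdp.hortonworks.com/hive?createDatabaseIfNotExist=true'
host=${url#jdbc:mysql://}   # strip the scheme prefix
host=${host%%/*}            # strip everything from the first "/" (db + params)
echo "$host"
```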
08-16-2018
04:20 AM
@Lian Jiang Correct, we do not need to worry about the "yarn_hbase_java_io_tmpdir" property, as it is controlled by the property "hbase_java_io_tmpdir". Hence, if we do not set "hbase_java_io_tmpdir" ourselves, the default value for both properties will be "/tmp":

yarn_hbase_java_io_tmpdir = default("/configurations/yarn-hbase-env/hbase_java_io_tmpdir", "/tmp")

As your reader seems to be working fine now, it would be great if you could mark this thread as answered, so that it will be useful for other HCC users who want to know the details about these properties and can quickly browse the answers.
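The fallback behaviour of that `default(...)` call can be illustrated with plain shell parameter expansion (a sketch of the same semantics, not Ambari's actual Python helper):

```shell
# If hbase_java_io_tmpdir is unset or empty, fall back to /tmp --
# mirroring default("/configurations/yarn-hbase-env/hbase_java_io_tmpdir", "/tmp").
hbase_java_io_tmpdir=""   # operator did not set it
yarn_hbase_java_io_tmpdir=${hbase_java_io_tmpdir:-/tmp}
echo "$yarn_hbase_java_io_tmpdir"
```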
08-16-2018
02:24 AM
@Serg Serg Even after following the article "javapython-updates-and-ambari-agent-tls-settings", if you still see the SSL error then please refer to the article "JDK Changes Causing Ambari Server/Agent Registration".

Please check the following file inside your Ambari Server to verify the disabled algorithms, and ensure that it does not contain '3DES_EDE_CBC':

# grep 'jdk.tls.disabledAlgorithms' $JAVA_HOME/jre/lib/security/java.security
jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 1024, \
    EC keySize < 224, DES40_CBC, RC4_40

Here the $JAVA_HOME value should be the one mentioned in the "java.home" property of the ambari.properties file. Example:

# grep 'java.home' /etc/ambari-server/conf/ambari.properties
java.home=/usr/jdk64/jdk1.8.0_112

Can you please also share your exact Java version details, along with the ambari-agent logs and the ambari.properties file?
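For a quick automated check, something like this sketch can flag the problematic entry. (The java.security content below is a sample written to a temp file; on a real server, grep $JAVA_HOME/jre/lib/security/java.security directly.)

```shell
# Sample java.security content for illustration only.
SEC_FILE=$(mktemp)
cat > "$SEC_FILE" <<'EOF'
jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, \
    EC keySize < 224, 3DES_EDE_CBC
EOF
# Flag the cipher entry that can break Ambari agent registration.
if grep -q '3DES_EDE_CBC' "$SEC_FILE"; then
  status="3DES_EDE_CBC is disabled: agent registration may fail"
else
  status="3DES_EDE_CBC not listed"
fi
echo "$status"
rm -f "$SEC_FILE"
```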
08-15-2018
11:23 PM
@Lian Jiang For other HCC users' reference, adding a link to another thread which describes this issue in more detail:
https://community.hortonworks.com/questions/212329/hdp30-timeline-service-v2-reader-cannot-create-zoo.html?childToView=212445#answer-212445
08-15-2018
11:15 PM
@Lian Jiang Good to know that with the help of "hbase_java_io_tmpdir" your issue is partially solved.

Regarding the "yarn_hbase_java_io_tmpdir" property, you can find it inside "Advanced yarn-hbase-env":

Ambari UI --> Yarn --> Configs --> Advanced --> "Advanced yarn-hbase-env" --> "hbase-env template"

export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:ErrorFile=$HBASE_LOG_DIR/hs_err_pid%p.log -Djava.io.tmpdir={{yarn_hbase_java_io_tmpdir}}"

So just replace the value of {{yarn_hbase_java_io_tmpdir}} with your desired value. The default value of "yarn_hbase_java_io_tmpdir" is calculated as follows:

# grep 'yarn_hbase_java_io_tmpdir' /var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/params_linux.py
yarn_hbase_java_io_tmpdir = default("/configurations/yarn-hbase-env/hbase_java_io_tmpdir", "/tmp")
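The substitution Ambari performs on that template line can be sketched as follows (the /grid/0/tmp value is just an example override, not a recommendation):

```shell
# Replace the {{yarn_hbase_java_io_tmpdir}} placeholder the way the
# template rendering does, using an example value.
yarn_hbase_java_io_tmpdir=/grid/0/tmp
template='-Djava.io.tmpdir={{yarn_hbase_java_io_tmpdir}}'
opts=$(printf '%s\n' "$template" | sed "s|{{yarn_hbase_java_io_tmpdir}}|$yarn_hbase_java_io_tmpdir|")
echo "$opts"
```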
08-15-2018
01:31 PM
@Michael Bronson Please define a "rollingPolicy" inside the RollingFileAppender, as described in the following article:
https://community.hortonworks.com/articles/50058/using-log4j-extras-how-to-rotate-as-well-as-zip-th.html

Create a new appender (for example, ZIPRFA) as in the mentioned article, with a FileNamePattern like:

log4j.appender.XXXXXX.rollingPolicy.FileNamePattern=${hive.log.dir}/${hive.log.file}-.%d{yyyyMMdd}.log.gz