Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2718 | 04-27-2020 03:48 AM |
| | 5279 | 04-26-2020 06:18 PM |
| | 4445 | 04-26-2020 06:05 PM |
| | 3567 | 04-13-2020 08:53 PM |
| | 5377 | 03-31-2020 02:10 AM |
08-28-2018
02:10 AM
@Sahil M Where did you get the "10.99.162.XX" IP address that you want to make static? Can you use the same 172.x.x.x address as the "static" one instead of defining a completely new address range? Example: https://ashfaqshinwary.wordpress.com/2018/04/02/horton-works-assign-static-ip-to-hdp-sandbox-v2-x/
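A rough sketch of what assigning a static IP could look like on a CentOS-based sandbox VM, assuming the interface is eth0 and treating the IPADDR/NETMASK/GATEWAY values below as placeholders to be replaced with addresses from your own 172.x.x.x network:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=172.17.0.101     # placeholder: pick a free address in your range
  NETMASK=255.255.255.0   # placeholder
  GATEWAY=172.17.0.1      # placeholder
# service network restart
After the restart, "ip addr show eth0" should report the new address.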
08-28-2018
01:48 AM
@Sahil M Have you restarted the VM (especially the network services) after making the changes? How are you trying to do the SSH? Can you please share the exact command and the output which you get while doing the SSH?
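For reference, the command would look something like the following (just an illustration; <vm-ip> is a placeholder for whatever address the VM currently has, and the port depends on whether you are targeting the VM itself or the sandbox container):
# ssh root@<vm-ip>
# ssh -p 2222 root@<vm-ip>     # the HDP sandbox container usually maps SSH to port 2222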
08-28-2018
01:25 AM
1 Kudo
@Sahil M Do you want something like the one mentioned in the following HCC thread? https://community.hortonworks.com/questions/79618/25-sandbox-static-ip.html
08-27-2018
12:21 PM
@Arjun Das As mentioned earlier, instead of posting just a few lines of the error (using tailf), it would be really great if you could share the whole stack trace. We can resolve the issue much more quickly once we have the complete information. So can you please share the following details:
1. Why do we see different hostnames in your JDBC URL, like "localhost" in some places and "N9722" in others inside ambari.properties?
2. Are you sure that the MySQL database is running fine and listening on port 3306 on the correct host?
# netstat -tnlpa | grep 3306
Also please check if you are able to connect to the MySQL instance using telnet:
# telnet localhost 3306
# telnet N9722 3306
3. Are you sure that you want to set the Ambari DB owner as "root", or is this a mistake?
server.jdbc.user.name=root
4. Have you already set up the Ambari DB as mentioned in the following doc: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-administration/content/using_ambari_with_mysql.html Especially the following part:
# mysql -u root -p
CREATE USER '<AMBARIUSER>'@'%' IDENTIFIED BY '<AMBARIPASSWORD>';
GRANT ALL PRIVILEGES ON *.* TO '<AMBARIUSER>'@'%';
CREATE USER '<AMBARIUSER>'@'localhost' IDENTIFIED BY '<AMBARIPASSWORD>';
GRANT ALL PRIVILEGES ON *.* TO '<AMBARIUSER>'@'localhost';
CREATE USER '<AMBARIUSER>'@'<AMBARISERVERFQDN>' IDENTIFIED BY '<AMBARIPASSWORD>';
GRANT ALL PRIVILEGES ON *.* TO '<AMBARIUSER>'@'<AMBARISERVERFQDN>';
FLUSH PRIVILEGES;
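Once the Ambari user and schema exist, a quick sanity check could look like this (a sketch; <AMBARIUSER> and the "ambari" database name are whatever you actually created, and the host should match the one in ambari.properties):
# mysql -u <AMBARIUSER> -p -h N9722 ambari -e "SELECT 1;"
# grep "server.jdbc" /etc/ambari-server/conf/ambari.properties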
08-27-2018
10:12 AM
@Arjun Das You are using the mysql-connector-java JDBC driver of version 8, which is not right:
server.jdbc.driver.path=/usr/lib/java/mysql-connector-java-8.0.12.jar
Please use a MySQL 5.6/5.7 JAR. Please refer to this post to understand this in detail: https://community.hortonworks.com/questions/211807/install-mysql-connector-for-hive-metastore.html
Please make sure that you are using the correct MySQL JDBC driver. For example, if you are using HDP 2.6.5, then a MySQL 5.7 database and the matching JDBC driver version need to be used. MySQL Connector/J 5.1 download link: https://dev.mysql.com/downloads/connector/j/5.1.html
So download the correct version of the MySQL driver, put it somewhere on the Ambari Server host, and then run the following commands to set up the JDBC driver:
# ambari-server setup --jdbc-db=mysql --jdbc-driver=/path/to/mysql/mysql-connector-java.jar
# ambari-server restart
Please make sure to use the correct driver path based on your setup: /path/to/mysql/mysql-connector-java.jar
https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-administration/content/using_ambari_with_mysql.html
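If you are not sure which version a given connector JAR actually is, you could check its manifest (a sketch, assuming the unzip utility is installed and that the path points at your downloaded JAR):
# unzip -p /path/to/mysql/mysql-connector-java.jar META-INF/MANIFEST.MF | grep -i version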
08-27-2018
09:36 AM
1 Kudo
@Michael Bronson Yes, it can be done.
08-27-2018
09:24 AM
1 Kudo
@Michael Bronson Wonderful !!!!!
08-27-2018
08:40 AM
1 Kudo
@Michael Bronson Please find my working log4j template content (attached as test-log4j.txt). I just added the following two additional parameters in my DataNode JVM options because I wanted the GC log to get filled up more quickly: -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy
{% else %}
SHARED_HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT"
export HADOOP_NAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HADOOP_NAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=10K -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy"
Notice that in my working DataNode opts I have the following settings: -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=10K -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy
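For clarity, the rotation-related flags in the DataNode opts above behave as follows (a commented excerpt only, not a complete configuration; the tiny 10K size is just to make rotation happen quickly while testing):
  -XX:+UseGCLogFileRotation        # enable rotation of the GC log
  -XX:NumberOfGCLogFiles=5         # keep at most 5 rotated files (.0 through .4)
  -XX:GCLogFileSize=10K            # rotate once a file reaches roughly 10 KB
  -XX:+PrintGCTimeStamps           # timestamp each GC entry
  -XX:+PrintTenuringDistribution   # extra per-GC detail, just to generate more log volume
  -XX:+PrintAdaptiveSizePolicy     # extra per-GC detail, just to generate more log volume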
08-27-2018
08:25 AM
@Michael Bronson You will need to make sure that you have not edited the GC log file manually, like you did earlier; doing so will cause the JVM to lose its logging index on the GC log file. So if by any chance you used the following command, then it won't work:
# cat gc.log-201808262046 >> gc.log-201808270656.0.current
You need an uninterrupted GC log file, meaning you have not touched the file. Please try restarting this DataNode after deleting the manually edited GC log file, and then keep running the following command 20-30 times:
# /usr/jdk64/jdk1.8.0_112/bin/jcmd `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` GC.run
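If you do not want to run it by hand 20-30 times, a simple loop does the same thing (a sketch; the JDK path and PID file are the ones from your own setup, and the sleep is only there to space the GC runs out a little):
# su - hdfs
# for i in $(seq 1 30); do /usr/jdk64/jdk1.8.0_112/bin/jcmd `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` GC.run; sleep 2; done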
08-27-2018
08:09 AM
1 Kudo
@Michael Bronson If you just want to quickly fill the GC log to see the log rotation, then try this instead of using the "cat" command to add content inside the actual current GC log file:
# su - hdfs
# /usr/jdk64/jdk1.8.0_112/bin/jcmd `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` GC.run
# SYNTAX:
# $JAVA_HOME/bin/jcmd $DATANODE_PID GC.run
Use the "jcmd" utility to explicitly trigger the GC many times. After running this command around 20-30 times, you will see that the GC log of the DataNode gets rotated, and in the current GC log you will see messages like:
2018-08-27 08:11:07 GC log file has reached the maximum size. Saved as /var/log/hadoop/hdfs/gc.log-201808270803.0
I see the log rotation works as expected, as follows:
-rw-r--r--. 1 hdfs hadoop 10617 Aug 27 08:12 gc.log-201808270803.1
-rw-r--r--. 1 hdfs hadoop 10615 Aug 27 08:13 gc.log-201808270803.2
-rw-r--r--. 1 hdfs hadoop 10616 Aug 27 08:13 gc.log-201808270803.3
-rw-r--r--. 1 hdfs hadoop  4001 Aug 27 08:13 gc.log-201808270803.4.current
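To confirm the rotation yourself, you could list the rotated files and look for the "maximum size" message (a sketch, assuming the default HDFS GC log directory shown above):
# ls -lrt /var/log/hadoop/hdfs/gc.log-*
# grep "reached the maximum size" /var/log/hadoop/hdfs/gc.log-*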