Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Views | Posted |
|---|---|
| 2720 | 04-27-2020 03:48 AM |
| 5280 | 04-26-2020 06:18 PM |
| 4445 | 04-26-2020 06:05 PM |
| 3570 | 04-13-2020 08:53 PM |
| 5377 | 03-31-2020 02:10 AM |
08-13-2018
06:48 AM
@Ibrahim Jarrar The HDP-2.6.2 Release Notes reference this fix: HIVE-15081 (RetryingMetaStoreClient.getProxy(HiveConf, Boolean) doesn't match constructor of HiveMetaStoreClient): https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_release-notes/content/patch_hive.html
08-13-2018
06:38 AM
@Ibrahim Jarrar You might be hitting the issue described in the HCC thread below: https://community.hortonworks.com/questions/208893/hive-upgrade-tool-failed-at-hdp-3-upgrade.html . This looks like HIVE-15081 (https://issues.apache.org/jira/browse/HIVE-15081), which seems to be fixed in HDP 2.6.2. So can you please upgrade to HDP-2.6.2 / HDP-2.6.5 first and then attempt the upgrade to HDP 3?
08-10-2018
06:31 AM
@Zyann Have you followed the steps mentioned in the articles below to configure SPNEGO for NiFi? https://community.hortonworks.com/articles/34147/nifi-security-user-authentication-with-kerberos.html https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.1/bk_registry-administration/content/kerberos_properties.html
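For reference, the relevant SPNEGO entries in nifi.properties typically look like the sketch below; the principal, keytab path, and expiration values here are placeholders that you would adjust for your own realm and host:
# SPNEGO-related entries in nifi.properties (values are examples, not from this thread)
nifi.kerberos.krb5.file=/etc/krb5.conf
nifi.kerberos.spnego.principal=HTTP/nifi-node1.example.com@EXAMPLE.COM
nifi.kerberos.spnego.keytab.location=/etc/security/keytabs/spnego.service.keytab
nifi.kerberos.spnego.authentication.expiration=12 hours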
08-10-2018
05:13 AM
1 Kudo
@Daniel Zafar Do you have the following kind of JAR present in your cluster? The version might be slightly different in your case:
/usr/hdp/3.0.0.0-1634/spark2/aux/spark-2.3.1.3.0.0.0-1634-yarn-shuffle.jar
Do you have Spark2 installed in your cluster? Please check the "yarn.nodemanager.aux-services" property of the YARN service; it should include the spark2 shuffle, for example:
mapreduce_shuffle,spark2_shuffle,{{timeline_collector}}
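A quick way to check both from the shell; note that the wildcarded JAR path and the config location below are my assumptions, since the exact HDP build directory can differ per cluster:
# ls /usr/hdp/*/spark2/aux/spark-*-yarn-shuffle.jar
# grep -A 1 'yarn.nodemanager.aux-services' /etc/hadoop/conf/yarn-site.xml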
08-09-2018
10:46 PM
1 Kudo
@Victor L It looks like your NameNode has started but is still in SafeMode, so it will not allow any mutating operations on HDFS. That's the reason you are getting this SafeMode-related error. To validate this, you can try creating a simple directory on HDFS; you should get the same kind of error:
# su - hdfs -c "hdfs dfs -mkdir /tmp/test"
If you want to know whether the NameNode is in SafeMode or not, run the following command:
# su - hdfs -c "hdfs dfsadmin -safemode get"
To tell the NameNode to leave SafeMode, run:
# su - hdfs -c "hdfs dfsadmin -safemode leave"
If the NameNode still does not come out of SafeMode, try restarting it once and check the NameNode log for any errors/warnings:
# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh stop namenode"
# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
# tail -f /var/log/hadoop/hdfs/hadoop-hdfs-namenod*
Please share the NN logs if it still does not start cleanly.
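As an additional sketch (standard HDFS tooling, not from the original reply), the commands below can surface missing or under-replicated blocks, which are a common reason a NameNode stays stuck in SafeMode:
# su - hdfs -c "hdfs dfsadmin -report | head -20"
# su - hdfs -c "hdfs fsck / | tail -10"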
08-09-2018
10:34 PM
1 Kudo
@Daniel Zafar Your NodeManager command execution was fine; however, the netstat output did not show any port listening on 8042, which means the NodeManager was not actually started successfully:
# netstat -tnlpa | grep 8042
Can you please check and share the NM logs? Also, regarding installing NodeManager on other nodes: it is quite easy and can be done via the Ambari UI (or its REST API, see the sketch below) as follows:
Ambari UI --> Hosts (tab) --> Click on the desired host link --> Click the "Add" button (on the Components panel) and then choose NodeManager from the dropdown.
Similarly, if you want to delete a NodeManager from a particular host:
Ambari UI --> Hosts (tab) --> Click on the desired host link --> On the host page, click on the "NodeManager" dropdown menu. After stopping the NodeManager you will see the option to "Delete" it.
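For what it's worth, the same add-component flow can be scripted through the Ambari REST API; this is a minimal sketch assuming a cluster named "c1", default admin credentials, and placeholder hostnames:
# Register the NODEMANAGER component on the target host:
# curl -u admin:admin -H "X-Requested-By: ambari" -X POST http://AMBARI_HOST:8080/api/v1/clusters/c1/hosts/NODE_HOST/host_components/NODEMANAGER
# Then install it (a second PUT with "state":"STARTED" starts it afterwards):
# curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"HostRoles":{"state":"INSTALLED"}}' http://AMBARI_HOST:8080/api/v1/clusters/c1/hosts/NODE_HOST/host_components/NODEMANAGER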
08-09-2018
10:15 PM
@Daniel Zafar The error indicates that the NodeManager did not start successfully or might be down, hence port 8042 is not accessible. Maybe you can try starting the NodeManager manually from the command line to isolate the issue (to see if it starts fine without Ambari), because Ambari also performs a NodeManager health validation during startup:
# su -l yarn -c "/usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh start nodemanager"
Then verify whether port 8042 is open or not:
# netstat -tnlpa | grep 8042
Also, once the NodeManager is started via the command line, please check the NodeManager logs and the free memory available on the host.
Logs:
/var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-*.log
/var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-*.out
Memory:
# ps -ef | grep `cat /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid`
# $JAVA_HOME/bin/jmap -heap `cat /var/run/hadoop-yarn/yarn/yarn-yarn-nodemanager.pid`
# free -m
NodeManager can be installed on all cluster nodes so that the ResourceManager has more nodes available. For a 4-node cluster I would suggest installing it on all 4 nodes (or at least 3 nodes). Installing NodeManager on a single node might cause very slow processing of your jobs.
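Once it is up, a quick sanity check (standard YARN CLI, not from the original reply) is to confirm the node registered with the ResourceManager:
# su - yarn -c "yarn node -list -all"
A healthy NodeManager should appear with state RUNNING in the output.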
08-09-2018
01:33 AM
@Daniel Zafar Can you please try to perform a "yum reinstall" of the mentioned package?
# yum reinstall smartsense-hst -y
The above-mentioned error can occur if the HST installation is corrupted or has missing files like:
/usr/hdp/share/hst/hst-agent/lib/hst_agent/shell.py
/usr/hdp/share/hst/hst-agent/lib/hst_agent/shell.pyc
Also, please share the output of the following command so that we can verify you see the correct "sys.path". Example:
# python -m site
sys.path = [
'/root',
'/usr/lib64/python27.zip',
'/usr/lib64/python2.7',
'/usr/lib64/python2.7/plat-linux2',
'/usr/lib64/python2.7/lib-tk',
'/usr/lib64/python2.7/lib-old',
'/usr/lib64/python2.7/lib-dynload',
'/usr/lib64/python2.7/site-packages',
'/usr/lib/python2.7/site-packages',
]
USER_BASE: '/root/.local' (doesn't exist)
USER_SITE: '/root/.local/lib/python2.7/site-packages' (doesn't exist)
ENABLE_USER_SITE: True
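Before reinstalling, a small sketch (standard RPM tooling, not from the original reply) to confirm whether those agent files are actually missing or have been modified:
# ls -l /usr/hdp/share/hst/hst-agent/lib/hst_agent/shell.py*
# rpm -V smartsense-hst
rpm -V prints nothing when all of the package's files verify cleanly; "missing" entries would confirm the corruption.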
08-08-2018
10:36 PM
2 Kudos
@Michael Bronson MapReduce uses the RFA appender, which is defined inside "Advanced hdfs-log4j":
Ambari UI --> HDFS --> Configs --> Advanced --> "Advanced hdfs-log4j" --> hdfs-log4j template (text area)
By default you will find something like the following:
#
# Rolling File Appender
#
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Logfile size and 30-day backups
log4j.appender.RFA.MaxFileSize={{hadoop_log_max_backup_size}}MB
log4j.appender.RFA.MaxBackupIndex={{hadoop_log_number_of_backup_files}}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
NOTE: Making changes to the "RFA" appender is not a good idea, because this appender is also used by many other components. So if you just want to limit the number of logfiles based on size for the MapReduce History Server only, you can try this:
1. Create a new appender like "RFA_MAPRED" just below the RFA appender shown above:
Ambari UI --> HDFS --> Configs --> Advanced --> "Advanced hdfs-log4j" --> hdfs-log4j template (text area)
Add "RFA_MAPRED" as follows:
#
# MapReduce Rolling File Appender (ADDED MANUALLY)
#
log4j.appender.RFA_MAPRED=org.apache.log4j.RollingFileAppender
log4j.appender.RFA_MAPRED.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA_MAPRED.MaxFileSize=100MB
log4j.appender.RFA_MAPRED.MaxBackupIndex=3
log4j.appender.RFA_MAPRED.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA_MAPRED.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
#log4j.appender.RFA_MAPRED.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
2. Save the config changes (click the Save button).
3. Now edit "Advanced mapred-env":
Ambari UI --> MapReduce --> Configs --> Advanced --> "Advanced mapred-env" --> mapred-env template
Change the following line to use the "RFA_MAPRED" appender instead of the default "RFA" (single-line change):
Old value: export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA
New value: export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA_MAPRED
4. Save and restart all required services to get this change reflected.
5. You can also verify that MapReduce is now using the correct appender by running the following command on the Job History Server host:
# ps -ef | grep JobHistoryServer
You should see -Dhadoop.root.logger=INFO,RFA_MAPRED in the output of the above command.
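After a restart, the rotation itself can be sanity-checked by listing the History Server log directory; the path below is an assumption based on common HDP defaults and may differ on your cluster:
# ls -lh /var/log/hadoop-mapreduce/mapred/
With MaxFileSize=100MB and MaxBackupIndex=3, you should see at most the active .log file plus rotated .log.1 through .log.3 backups.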
08-08-2018
01:57 AM
1 Kudo
@Harry Li Please try this to verify your JDBC driver version. On the Ambari Server host:
# mkdir /tmp/JDBC
# cd /tmp/JDBC
# cp -f /var/lib/ambari-server/resources/mysql-connector-java.jar /tmp/JDBC/
# jar xvf mysql-connector-java.jar
Then grep the version:
# grep 'Implementation-Versio' META-INF/MANIFEST.MF
Implementation-Version: 8.0.11
# cat META-INF/services/java.sql.Driver
com.mysql.cj.jdbc.Driver
So if your MySQL JDBC driver version is correct (a MySQL 5 JDBC driver), then you should be seeing something like the following instead:
# grep 'Implementation-Versio' META-INF/MANIFEST.MF
Implementation-Version: 5.1.25-SNAPSHOT
# cat META-INF/services/java.sql.Driver
com.mysql.jdbc.Driver
MySQL Connector/J 5.1 download link: https://dev.mysql.com/downloads/connector/j/5.1.html
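As a shortcut, a sketch using standard unzip (not from the original reply) reads the manifest without extracting the whole JAR:
# unzip -p /var/lib/ambari-server/resources/mysql-connector-java.jar META-INF/MANIFEST.MF | grep Implementation-Version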