Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2730 | 04-27-2020 03:48 AM |
| | 5288 | 04-26-2020 06:18 PM |
| | 4458 | 04-26-2020 06:05 PM |
| | 3584 | 04-13-2020 08:53 PM |
| | 5385 | 03-31-2020 02:10 AM |
10-04-2017
06:39 AM
@Burak Bicen I would suggest opening a separate thread, since this question is only loosely related to this one, even though the topic is Druid-related. That way you are more likely to get an accurate response.
10-04-2017
03:18 AM
1 Kudo
@Michael Coffey The problem seems to be this:

2017-10-02T19:56:25.684428Z 0 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
2017-10-02T19:56:25.684440Z 0 [Note] - '127.0.0.1' resolves to '127.0.0.1';
2017-10-02T19:56:25.684464Z 0 [Note] Server socket created on IP: '127.0.0.1'.

Your MySQL server is starting but listening only on the "127.0.0.1" bind address. You should edit the "bind-address" setting inside "/etc/my.cnf" to make it bind to the hostname or to all interfaces:

bind-address=0.0.0.0

Per https://dev.mysql.com/doc/refman/5.7/en/server-options.html : If the address is 0.0.0.0, the server accepts TCP/IP connections on all server host IPv4 interfaces. If the address is ::, the server accepts TCP/IP connections on all server host IPv4 and IPv6 interfaces.
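After changing the setting, you can confirm that mysqld is now listening on all interfaces. A minimal sketch, assuming a systemd-based host where the service is named "mysqld" (adjust the service name if yours differs):

# restart MySQL so the new bind-address takes effect
systemctl restart mysqld
# the listener should now show 0.0.0.0:3306 (or :::3306) instead of 127.0.0.1:3306
netstat -tnlp | grep 3306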
10-03-2017
08:16 PM
@Michael Coffey The MySQL log location is usually defined inside the MySQL config file "/etc/my.cnf":

# grep 'log-error' /etc/my.cnf
log-error=/var/log/mysqld.log

The MySQL log can be really helpful to find out why it is failing with the "Communications link failure".
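If the full log is long, here is a small sketch (assuming the default log path shown above) to pull out just the most recent error and warning entries:

# show the last 20 error/warning lines from the MySQL log
grep -iE 'error|warning' /var/log/mysqld.log | tail -n 20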
10-03-2017
06:22 PM
@Michael Coffey Can you please check and share the output of the "mysqld.log" file, so that we can see whether it reports any issues?

# less /var/log/mysqld.log
10-03-2017
02:09 PM
@Guillaume Roger
1. Please check whether the parameter "tez.am.view-acls" is set to * (not blank): Ambari UI -> MapReduce2 -> Configs -> Advanced tez-site -> Add -> tez.am.view-acls = *
2. Also please check whether the value of "yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes" in yarn-site.xml is set to "org.apache.tez.dag.history.logging.ats.TimelineCachePluginImpl"
3. Also, for MR jobs, please check whether the property "mapreduce.job.acl-view-job" is set correctly: Ambari UI -> MapReduce2 -> Configs -> Add -> mapreduce.job.acl-view-job = *
4. Is this a Kerberized cluster?

A quick way to spot-check the first two values on disk is shown below.
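This sketch checks whether Ambari actually pushed those values out to the nodes; the paths /etc/tez/conf/tez-site.xml and /etc/hadoop/conf/yarn-site.xml are assumptions based on typical HDP client layouts, so adjust them to your install:

# print each property plus the following line (its value) from the rendered client configs
grep -A1 'tez.am.view-acls' /etc/tez/conf/tez-site.xml
grep -A1 'group-id-plugin-classes' /etc/hadoop/conf/yarn-site.xml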
10-03-2017
06:37 AM
@darkz yu The error indicates that your "/etc/hadoop/conf/hadoop-metrics2.properties" file does not have a property with the exact name "namenode.sink.timeline.collector.hosts". The example file should look like the following:

# grep 'sink.timeline.collector.hosts' /etc/hadoop/conf/hadoop-metrics2.properties
datanode.sink.timeline.collector.hosts=amb25102.example.com
resourcemanager.sink.timeline.collector.hosts=amb25102.example.com
nodemanager.sink.timeline.collector.hosts=amb25102.example.com
jobhistoryserver.sink.timeline.collector.hosts=amb25102.example.com
journalnode.sink.timeline.collector.hosts=amb25102.example.com
maptask.sink.timeline.collector.hosts=amb25102.example.com
reducetask.sink.timeline.collector.hosts=amb25102.example.com
applicationhistoryserver.sink.timeline.collector.hosts=amb25102.example.com

Notice that each of these properties ends with the ".sink.timeline.collector.hosts" suffix (note the "hosts"). If you have upgraded Ambari from version 2.2 to Ambari 2.5.1, then please double check that you followed the Ambari post-upgrade steps. The post-upgrade steps include upgrading the Ambari Metrics service as well: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-upgrade/content/upgrade_ambari_metrics.html Other post-upgrade steps are mentioned here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-upgrade/content/post_ambari_upgrade_tasks.html

Can you please share the output of the following command from the AMS collector host, so we can see whether the AMS binaries were properly upgraded?

# rpm -qa | grep ambari
ambari-server-2.5.1.0-159.x86_64
ambari-metrics-monitor-2.5.1.0-159.x86_64
ambari-agent-2.5.1.0-159.x86_64
ambari-metrics-hadoop-sink-2.5.1.0-159.x86_64
ambari-metrics-grafana-2.5.1.0-159.x86_64
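You can also confirm directly whether the namenode sink entry is the one missing. A minimal sketch, assuming the default config path; note that Ambari manages this file, so the proper fix is via the upgrade steps above rather than a manual edit:

# prints a marker only if the property the error complains about is absent
grep -q 'namenode.sink.timeline.collector.hosts' /etc/hadoop/conf/hadoop-metrics2.properties || echo "namenode sink property missing"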
10-03-2017
05:23 AM
@Michael Coffey We see the following error:

Metastore connection URL: jdbc:mysql://hadoop12.xxxxx.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver: com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. SQL Error code: 0
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:80)
...
org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:76)
... 11 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)

Basically this is a MySQL connection issue, so you should check a couple of things first:

1). On the machine where MySQL is running, check whether the MySQL process is up:

# ssh root@hadoop12.neocortix.com
# ps -ef | grep mysql

2). Check whether the MySQL port is open, by passing the MySQL server PID to the following command:

# netstat -tnlpa | grep `cat /var/run/mysqld/mysqld.pid`
tcp6 0 0 :::3306 :::* LISTEN 1235/mysqld

3). Check whether you can establish a JDBC connection to MySQL, using the following kind of Java utility, from the ambari-server host AND from the host where MySQL is installed:

# /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/share/java/mysql-connector-java.jar -Djava.library.path=/var/lib/ambari-agent/cache org.apache.ambari.server.DBConnectionVerification "jdbc:mysql://hadoop12.xxxxx.com/hive" "hive" "hive" com.mysql.jdbc.Driver

**NOTE:** The JAR "DBConnectionVerification.jar" is provided by Ambari for DB connection checks. You will need to change the Java path in the above command, and please use the correct hostname (change hadoop12.xxxxx.com to the correct one; I have masked the hostname) to verify connectivity.

The above will isolate whether the issue is MySQL DB connectivity. Usually the "CommunicationsException: Communications link failure" error indicates either a MySQL DB server issue or a network issue.
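If that Java utility is not handy, a simpler first probe is to see whether anything answers on the MySQL port at all from the ambari-server host. A minimal sketch (hostname masked as above; nc may need to be installed):

# "Connection refused" here confirms a listener or firewall problem rather than a JDBC/driver problem
nc -vz hadoop12.xxxxx.com 3306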
10-03-2017
04:59 AM
@Yair Ogen Good to know that it works now. It would be great if you could mark this HCC thread as "Accepted" (answered), so that other HCC users can quickly find the solution to this issue instead of reading through the whole thread.
10-03-2017
04:27 AM
@Yair Ogen In this case we see that user "hive" is actually trying to write data inside the "/user/admin" directory:

Permission denied: user=hive, access=WRITE, inode="/user/admin/MOCK_DATA.csv":admin:hadoop:drwxr-xr-x

So either you should give write access to the "hive" user on the mentioned directory "/user/admin" (we can see it does not have the WRITE permission), or you should run the job as the "admin" user.

If you want to run as "admin":

# hdfs dfs -chown admin:hadoop /user/admin
# hdfs dfs -chmod 777 /user/admin

Or, if you want to run the Hive job as the "hive" user, then you should change the ownership to the "hive" user and set the permissions accordingly:

# hdfs dfs -chown hive:hadoop /user/admin
# hdfs dfs -chmod 777 /user/admin
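To confirm the change took effect, a quick check (the exact output format may vary between HDFS versions):

# list the directory entry itself to verify its owner, group, and mode
hdfs dfs -ls -d /user/admin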
10-03-2017
02:39 AM
@Prakash Punj You can refer to the following article, which explains some very common areas to look at when we see slowness in Ambari: https://community.hortonworks.com/articles/131670/ambari-server-performance-tuning-troubleshooting-c.html

If your Ambari cluster is a bit old, then it is highly possible that the DB size has grown quite a bit, so in many cases a "db-cleanup" helps to improve performance considerably.

From Ambari 2.5.2 onwards, the name of this operation changes to "db-purge-history", and apart from the alert-related tables it also covers other tables such as host_role_command and execution_commands:

# ambari-server db-purge-history --cluster-name Prod --from-date 2017-08-01

See: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-administration/content/purging-ambari-server-history.html
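A minimal end-to-end sketch; the stop/start steps are an assumption based on the linked purge documentation, so double check the doc for your exact version:

# stop Ambari Server before purging its database history
ambari-server stop
# purge operational history older than the given date for the named cluster
ambari-server db-purge-history --cluster-name Prod --from-date 2017-08-01
# start Ambari Server again
ambari-server start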