Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 2448 | 04-27-2020 03:48 AM |
| 4885 | 04-26-2020 06:18 PM |
| 3976 | 04-26-2020 06:05 PM |
| 3220 | 04-13-2020 08:53 PM |
| 4926 | 03-31-2020 02:10 AM |
06-05-2019
08:37 AM
@Rupak Dum Are you sure that you installed HDP (not HDF)? HDF does not provide HDFS by default. Based on the error, it looks like Ambari is not able to find the HDFS settings (core-site / hdfs-site). Are you sure that HDFS is installed in your cluster?
java.lang.NullPointerException
at org.apache.ambari.server.view.ClusterImpl.getConfigByType(ClusterImpl.java:72)
at org.apache.ambari.view.utils.hdfs.ConfigurationBuilder.copyPropertiesBySite(ConfigurationBuilder.java:223)
at org.apache.ambari.view.utils.hdfs.ConfigurationBuilder.buildConfig(ConfigurationBuilder.java:358)
When you open the File View, it uses the Ambari cluster APIs to read the core-site and hdfs-site details, so it fails if those configurations are missing.
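To confirm whether Ambari can see the HDFS configs at all, you can query the cluster configuration API directly. This is a sketch: the host, credentials, and cluster name below are placeholders for your environment.

```shell
# Placeholders -- substitute your own Ambari host, credentials, and cluster name.
AMBARI="http://ambari.example.com:8080"
CLUSTER="MyCluster"

# List the desired configuration types Ambari knows about for the cluster.
# If core-site / hdfs-site are missing from this output, the File View NPE
# above is expected.
curl -s -u admin:admin "$AMBARI/api/v1/clusters/$CLUSTER?fields=Clusters/desired_configs" \
  | grep -E 'core-site|hdfs-site'
```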
06-04-2019
06:42 AM
@John It looks like it is complaining about the JDBC driver:
SEVERE: The web application [/oozie] registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
This can happen if the Oozie server was not stopped properly, so please kill the Oozie process cleanly and then restart it fresh. Also, please check which "mysql" JDBC driver version is present in "/usr/hdp/current/oozie-server/oozie-server/webapps/oozie/WEB-INF/lib/". You can try updating the JDBC driver jar inside "/usr/hdp/current/oozie-server/libext/" and then prepare a new war file:
1. Stop the Oozie server.
2. Run the command below:
# /usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war
3. Start the Oozie server.
If it still fails, please share the full log.
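The driver-refresh procedure can be sketched end to end as follows. The paths are the HDP defaults from this thread; the connector jar source path and the service-management commands are examples and may differ in your environment.

```shell
# 1. Stop the Oozie server (kill any leftover process if a clean stop fails).
su - oozie -c "/usr/hdp/current/oozie-server/bin/oozied.sh stop"

# 2. Check which MySQL JDBC driver the packed webapp currently carries.
ls -l /usr/hdp/current/oozie-server/oozie-server/webapps/oozie/WEB-INF/lib/mysql-connector-java*.jar

# 3. Drop an updated driver jar into libext/ (source path is an example).
cp /usr/share/java/mysql-connector-java.jar /usr/hdp/current/oozie-server/libext/

# 4. Rebuild the war so the new jar is packed in.
/usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war

# 5. Start the Oozie server again.
su - oozie -c "/usr/hdp/current/oozie-server/bin/oozied.sh start"
```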
06-04-2019
03:55 AM
@Rohit Sharma In addition to Geoffrey's comment, can you also tail "/var/log/ambari-server/ambari-server.log", hit the Ambari UI Alerts page again, and see whether any WARNING / ERROR shows up there? When you open the Alerts page, do you see any error or failure in the browser's debugger console? Chrome Browser Menu --> More Tools --> Developer Tools --> Console (tab). Please also try opening the browser in Incognito mode to rule out a browser-caching issue. Finally, it is worth checking that your Ambari server has enough resources, such as sufficient memory. You can refer to the following doc to find out how to check the Ambari server memory settings and see the current memory usage: https://community.hortonworks.com/articles/131670/ambari-server-performance-tuning-troubleshooting-c.html
# /usr/jdk64/jdk1.8.0_112/bin/jmap -heap $AMBARI_SERVER_PID
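To help isolate whether the problem is in the UI or in the server, you can also fetch the alert data over the same REST API the Alerts page uses. A sketch; the host, credentials, and cluster name are placeholders.

```shell
AMBARI="http://ambari.example.com:8080"   # placeholder Ambari host
CLUSTER="MyCluster"                       # placeholder cluster name

# If this returns alert JSON promptly while the UI page hangs, the issue is
# likely browser-side (caching); if this also stalls, focus on
# ambari-server.log and the server's memory settings instead.
curl -s -u admin:admin \
  "$AMBARI/api/v1/clusters/$CLUSTER/alerts?fields=Alert/label,Alert/state" | head -n 40
```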
06-03-2019
10:57 PM
1 Kudo
@Bala Kolla What is the exact Ambari version that you are using? One possible reason is a change in the repository: some of the packages might have been installed from a different repo. You can check whether packages were installed from multiple HDP repositories:
# repoquery -a --installed --qf "%{ui_from_repo} %{name}" | grep -i '^@hdp'
Also, if you only changed the repo URL (not the actual HDP version), you can try the following. (I am not sure of your Ambari version, so I cannot say for certain whether this will work.)
1. On the ambari-agent node where the client installation is failing, locate the file "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py".
2. Comment out the line in this file that sets package_version to None. Before the change:
# grep 'package_version = None' /usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py
package_version = None
3. After commenting out that line, the grep output should look like the following:
# grep 'package_version = None' /usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py
# package_version = None
4. Remove the "script.pyo" and "script.pyc" files (or move them to /tmp or some other directory):
# rm -f /usr/lib/ambari-agent/lib/resource_management/libraries/script/script.pyc
# rm -f /usr/lib/ambari-agent/lib/resource_management/libraries/script/script.pyo
5. Restart ambari-agent on that node:
# ambari-agent restart
6. Try starting the component on this node from the Ambari UI and see if it still fails. If it does, it will require further investigation.
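Steps 2-4 can also be scripted with sed instead of editing by hand. A minimal sketch, shown against a temporary copy so you can dry-run it safely; on the failing agent node you would point SCRIPT_PY at the real script.py path above instead.

```shell
# Dry run on a temp copy. On the agent node, set instead:
# SCRIPT_PY=/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py
SCRIPT_PY=$(mktemp)
printf '      package_version = None\n' > "$SCRIPT_PY"

# Comment out the assignment while preserving its indentation.
sed -i 's/^\([[:space:]]*\)package_version = None/\1# package_version = None/' "$SCRIPT_PY"

# Verify: the line should now appear commented out, e.g. "# package_version = None"
grep 'package_version = None' "$SCRIPT_PY"
```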
06-03-2019
10:36 PM
@Monalisa Tripathy The mentioned property should go inside mapred-site.xml. In the Ambari UI: MapReduce2 --> Configs --> Advanced --> "Advanced mapred-site". There you will find the property set to INFO by default; change it to DEBUG:
yarn.app.mapreduce.am.log.level=INFO
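If you prefer not to click through the UI, the same property can be changed with Ambari's bundled configs helper script. A sketch: the script name and arguments differ slightly across Ambari versions (newer releases ship configs.py instead of configs.sh), and the host and cluster names are placeholders.

```shell
# Usage sketch: set <ambari-host> <cluster> <config-type> <key> <value>
# Ambari restarts of affected components are still needed afterwards.
/var/lib/ambari-server/resources/scripts/configs.sh set ambari.example.com MyCluster \
  mapred-site "yarn.app.mapreduce.am.log.level" "DEBUG"
```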
06-02-2019
11:08 PM
@Malleh Ceesay Good to know that it worked for you. It would be great if you could mark this HCC thread as answered by clicking the "Accept" link on the right answer.
06-02-2019
07:47 AM
1 Kudo
@Malleh Ceesay The URL you are typing is incorrect: the substring "ambari_common/tasks.logrotate.yml" should actually be "ambari_common/tasks/logrotate.yml". Example:
# wget -O /etc/logrotate.d/metron-ambari https://raw.githubusercontent.com/apache/metron/master/metron-deployment/ansible/roles/ambari_common/tasks/logrotate.yml
06-02-2019
07:41 AM
@Monalisa Tripathy Is this what you are looking for?
tez.am.log.level => Root logging level passed to the Tez Application Master.
yarn.app.mapreduce.am.log.level => The logging level for the MR ApplicationMaster. The allowed levels are: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.
05-14-2019
04:52 AM
1 Kudo
@duong tuan anh We see the following error:
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory
Hence we suspect that the "log.dirs" directory may not be a local directory for the Kafka broker. Ideally "log.dirs=/kafka-logs" should be a local directory on the Kafka host, not a shared directory. Please check whether more than one Kafka host is sharing the directory "/kafka-logs". If yes, then change it to a local path instead of a shared-directory path.
# df -h /kafka-logs
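A quick way to check both suspicions is to look at the filesystem type backing the log directory and at which process currently holds the lock file. A sketch; run it on each broker host (the path assumes /kafka-logs exists there, per the log.dirs setting above).

```shell
LOG_DIR=/kafka-logs   # value of log.dirs from the broker configuration

# A network filesystem type here (nfs, cifs, ...) means the brokers are
# sharing storage, which Kafka does not support for log.dirs.
stat -f -c 'filesystem type: %T' "$LOG_DIR"

# Show which process (if any) currently holds the Kafka lock file.
fuser -v "$LOG_DIR/.lock"
```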
05-12-2019
10:51 PM
@Madhura Mhatre Can you please run the following query inside your MySQL DB so that we can find out which database (ambaridb/rangerdb/hivedb) is actually using the most space? Please share the output.
mysql> SELECT table_schema "DB Name", ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB" FROM information_schema.tables GROUP BY table_schema;
Then, accordingly, we can check whether there is a possibility to clear some unwanted data. For example, in the case of ambaridb it may be old, unwanted alert and operational data that is consuming most of the DB space. For such cases, Ambari provides a utility to purge historical unwanted data:
# ambari-server db-purge-history --cluster-name YOUR_CLUSTER_NAME --from-date 2018-08-01
https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/administering-ambari/content/amb_purge_ambari_server_database_history.html
Also, to reclaim disk space used by the MySQL DB, you might sometimes need to run "OPTIMIZE TABLE <tablename>" on the DB:
https://www.percona.com/blog/2013/09/25/how-to-reclaim-space-in-innodb-when-innodb_file_per_table-is-on/
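Note that purging alone does not return freed InnoDB pages to the filesystem; an OPTIMIZE TABLE rebuild (per the Percona link above) does. A sketch, assuming innodb_file_per_table is on; the table names are typical large Ambari history tables and should be replaced by whatever your size query shows as the biggest consumers.

```shell
# Prompts for the ambari DB user's password; table names are examples.
mysql -u ambari -p ambari -e "OPTIMIZE TABLE alert_history; OPTIMIZE TABLE alert_notice;"
```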