Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions

| Title | Views | Posted |
| --- | --- | --- |
|  | 2452 | 04-27-2020 03:48 AM |
|  | 4890 | 04-26-2020 06:18 PM |
|  | 3977 | 04-26-2020 06:05 PM |
|  | 3222 | 04-13-2020 08:53 PM |
|  | 4928 | 03-31-2020 02:10 AM |
03-14-2019 01:44 AM
@Deepak SANAGAPALLI The following kind of message indicates a filesystem issue:

Unable to access /var/log/ambari-server directory. Confirm the directory is created and is writable by Ambari Server user account 'root'

So please check that the mentioned directory exists and that you are able to create some dummy files inside it:

# echo "Test" > /var/log/ambari-server/test.log
# mkdir -p /var/log/ambari-server/one

Also check whether the "/var/log" filesystem has become read-only by any chance; rebooting the host may help in such cases. Finally, please verify that the "ambari.properties" file shows the correct username:

# grep 'ambari-server.user' /etc/ambari-server/conf/ambari.properties
ambari-server.user=root
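As a quick sketch of checking for a read-only filesystem (assuming /var/log is a separate mount point; adjust the path to your layout):

# mount | grep '/var/log'
# touch /var/log/ambari-server/.rw-test && rm -f /var/log/ambari-server/.rw-test

If "ro" appears among the mount options, or the touch fails, the filesystem has gone read-only.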
03-14-2019 01:35 AM
1 Kudo
@Jes Chergui The Hortonworks Sandbox is a single-host cluster set up for testing and learning purposes; that is the reason the hostname was chosen that way. The Sandbox is a dockerized container. So is there any specific reason you would like to change/customize the hostname? You can do that, but we wanted to know what the requirement is. Ambari allows you to perform the change to a new hostname as described in the following doc. So once you have made the required hostname changes at the Docker level and in the "/etc/hosts" file, run:

# ambari-server update-host-names host_names_changes.json

https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-administration/content/ch_changing_host_names.html

Then follow the instructions mentioned there for formatZK and HDFS.
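For illustration, a minimal host_names_changes.json might look like the following (the cluster name and both hostnames here are placeholders; substitute your own old and new FQDNs):

{
  "Sandbox" : {
    "sandbox.hortonworks.com" : "new-hostname.example.com"
  }
}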
03-12-2019 11:07 AM
@Ruslan Fialkovsky How old is your Ambari-managed cluster (6 months / 1 year / 2 years)? After months of operation on larger clusters, the Ambari Server may begin to accrue a large amount of historical data in the database, which can cause UI performance degradation. In some old clusters we see lots of old "alert_history" (old operational log) entries and old alert notification data in the database; these entries keep growing over time, so the DB dump size grows and DB queries can respond slowly. You can use the following command to perform some DB cleanup:

# ambari-server db-purge-history --cluster-name DataLake --from-date 2018-12-15

Reference: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-administration/content/purging-ambari-server-history.html
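As a rough sketch of sizing the problem first (assuming a PostgreSQL-backed Ambari database named "ambari"; alert_history and host_role_command are standard tables in the Ambari schema), you can count the historical rows before purging:

# psql -U ambari -d ambari -c "SELECT count(*) FROM alert_history;"
# psql -U ambari -d ambari -c "SELECT count(*) FROM host_role_command;"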
03-12-2019 09:02 AM
2 Kudos
@Ruslan Fialkovsky The following HCC article explains some of the points to check for a slow cluster API response scenario. Can you please check whether it helps you troubleshoot the issue? See the section "Ambari API Response Time Check": https://community.hortonworks.com/articles/131670/ambari-server-performance-tuning-troubleshooting-c.html
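For a quick sketch of measuring the API response time yourself (assuming default admin credentials and port 8080; substitute your own host and credentials), you can time a cluster API call with curl:

# curl -o /dev/null -s -w "Total time: %{time_total}s\n" -u admin:admin "http://ambari-host:8080/api/v1/clusters"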
03-04-2019 08:30 PM
@Manjunath P N The error indicates that you might have some unwanted repos inside your "/etc/yum.repos.d" directory which are not reachable/accessible:

http://mirror.de.leaseweb.net/epel/7/x86_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed to connect to 2a00:c98:2030:a034::21: Network is unreachable"
Trying other mirror.

This looks like an internet connectivity issue at your end. Either disable those unwanted repos or fix the internet issue so that you can access them. For example, test whether this works:

# curl -iLv http://mirror.de.leaseweb.net/epel/7/x86_64/repodata/repomd.xml

If the internet connectivity is fine and there is no network proxy issue, then you can disable the "epel" repos in particular. To know more about how to disable/enable repos, please refer to: https://docs.fedoraproject.org/en-US/Fedora/16/html/System_Administrators_Guide/sec-Managing_Yum_Repositories.html
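As a sketch (assuming the repo id is "epel" and the yum-utils package is installed; check yum repolist for the actual id on your host), disabling the repo could look like:

# yum repolist all | grep -i epel
# yum-config-manager --disable epel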
03-04-2019 10:13 AM
@Ilia K You might be interested in the following two properties. Please check the doc below if these are what you are looking for: https://spark.apache.org/docs/2.2.0/configuration.html

"spark.dynamicAllocation.enabled": Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload. (Default value: false)

"spark.dynamicAllocation.maxExecutors": Upper bound for the number of executors if dynamic allocation is enabled. (Default value: infinity)
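As an illustration (the application class and jar names are placeholders; note that dynamic allocation also requires the external shuffle service to be enabled), setting both properties on spark-submit might look like:

# spark-submit --class com.example.MyApp \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.dynamicAllocation.maxExecutors=10 \
    --conf spark.shuffle.service.enabled=true \
    my-app.jar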
03-04-2019 10:02 AM
@ibrahima diattara Which version of HDF are you using? There was an issue reported in older versions with the toolkit heap setting. The default heap setting for the encryption utility is "-Xms128m -Xmx256m", which is hardcoded in the encrypt-config.sh script file. Please refer to the following SupportKB, which might help in fixing your issue: https://community.hortonworks.com/content/supportkb/150267/errorjavalangoutofmemoryerror-java-heap-space-whil.html
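As a sketch (the toolkit path below is a placeholder and the exact heap string varies by version; verify it with grep before editing), raising the hardcoded heap could look like:

# grep -n 'Xmx' /opt/nifi-toolkit/bin/encrypt-config.sh
# sed -i 's/-Xms128m -Xmx256m/-Xms512m -Xmx1g/' /opt/nifi-toolkit/bin/encrypt-config.sh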
02-28-2019 09:26 PM
@Manjunath P N You will need to install the KDC server on the KDC host manually on your own, as described in https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/_optional_install_a_new_mit_kdc.html

What will Ambari do? Once the KDC server setup is completed as per the above doc and it is running fine, you can simply enable Kerberos for your cluster from the Ambari UI. Ambari will install the Kerberos client packages on all cluster hosts (using yum), set up "/etc/krb5.conf" on all cluster machines where the Kerberos clients need to be set up, and create the keytabs there: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-security/content/launching_the_kerberos_wizard_automated_setup.html

The only thing you will need to do manually is install and set up the KDC server, so that you can tell Ambari where your KDC server is running and what the kadmin principals are; Ambari can then use that to set up Kerberos for your cluster.
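As a brief sketch (RHEL/CentOS commands; the realm and admin principal come from your own edits to /etc/krb5.conf and kdc.conf per the doc above), the manual KDC installation boils down to steps like:

# yum install krb5-server krb5-libs krb5-workstation
# kdb5_util create -s
# systemctl start krb5kdc kadmin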
02-27-2019 02:13 AM
@Michael Bronson I do not think that is possible, because you are talking about two different file systems (HDFS and the local filesystem). If you want to keep syncing your local data directory to an HDFS directory, then you can make use of a tool like Apache Flume.
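As a minimal sketch (the agent name, local path, and NameNode URI are placeholders), a Flume agent that watches a local directory with a spooldir source and writes to HDFS might look like:

agent.sources = spool
agent.channels = mem
agent.sinks = out

agent.sources.spool.type = spooldir
agent.sources.spool.spoolDir = /data/incoming
agent.sources.spool.channels = mem

agent.channels.mem.type = memory

agent.sinks.out.type = hdfs
agent.sinks.out.hdfs.path = hdfs://namenode.example.com:8020/data/synced
agent.sinks.out.hdfs.fileType = DataStream
agent.sinks.out.channel = mem

You would then start it with something like:

# flume-ng agent --conf conf --conf-file sync-to-hdfs.properties --name agent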
02-26-2019 01:01 AM
@Sami Ahmad You are adding "hadoop classpath"; however, you will also need to add "hbase classpath", something like the following (note that compiling a .java file is done with javac, not javap):

# javac -cp `hadoop classpath`:`hbase classpath`:. TestHbaseTable.java
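For completeness, a sketch of running the compiled class with the same classpath (the class name follows from the compile step above):

# java -cp `hadoop classpath`:`hbase classpath`:. TestHbaseTable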