Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2825 | 04-27-2020 03:48 AM |
| | 5479 | 04-26-2020 06:18 PM |
| | 4661 | 04-26-2020 06:05 PM |
| | 3702 | 04-13-2020 08:53 PM |
| | 5604 | 03-31-2020 02:10 AM |
03-04-2019
08:30 PM
@Manjunath P N The error indicates that you might have some unwanted repos inside your "/etc/yum.repos.d" directory which are not reachable/accessible: http://mirror.de.leaseweb.net/epel/7/x86_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed to connect to 2a00:c98:2030:a034::21: Network is unreachable"
Trying other mirror. This looks like an internet connectivity issue on your end. Either disable those unwanted repos, or fix the connectivity issue so that you can access them. For example, test whether this works: # curl -iLv http://mirror.de.leaseweb.net/epel/7/x86_64/repodata/repomd.xml If internet connectivity is fine and there is no network proxy issue, then you can disable the "epel" repos in particular. To know more about how to disable/enable repos, please refer to: https://docs.fedoraproject.org/en-US/Fedora/16/html/System_Administrators_Guide/sec-Managing_Yum_Repositories.html
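As a sketch of how disabling an unreachable repo can look (assuming the EPEL repo file is named epel.repo; check the actual filename on your host):

```shell
# Back up the repo file first, then disable the repo by setting enabled=0.
# /etc/yum.repos.d/epel.repo is an assumed filename -- verify it on your host.
cp /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.bak
sed -i 's/^enabled=1/enabled=0/' /etc/yum.repos.d/epel.repo

# Alternatively, with the yum-utils package installed:
# yum-config-manager --disable epel
```

Either approach keeps the repo definition in place so you can re-enable it once connectivity is restored.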
03-04-2019
10:02 AM
@ibrahima diattara Which version of HDF are you using? There was an issue reported in older versions with the toolkit heap setting. The default heap setting for the encryption utility is "-Xms128m -Xmx256m", which is hardcoded in the encrypt-config.sh script. Please refer to the following SupportKB, which might help in fixing your issue: https://community.hortonworks.com/content/supportkb/150267/errorjavalangoutofmemoryerror-java-heap-space-whil.html
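If upgrading is not an option, one workaround is to raise the hardcoded heap values in the script itself. This is only a sketch; the toolkit path below is an assumption, so adjust it to your actual HDF install:

```shell
# Hypothetical script location -- adjust to your HDF / NiFi toolkit install.
TOOLKIT_SCRIPT=/usr/hdf/current/nifi-toolkit/bin/encrypt-config.sh

# Keep a backup before editing.
cp "$TOOLKIT_SCRIPT" "${TOOLKIT_SCRIPT}.bak"

# Replace the hardcoded default heap with larger values.
sed -i 's/-Xms128m -Xmx256m/-Xms512m -Xmx1024m/' "$TOOLKIT_SCRIPT"
```

Note that a version upgrade may overwrite the script, so the edit would need to be reapplied afterwards.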
02-28-2019
09:26 PM
@Manjunath P N You will need to install the KDC server on the KDC host manually on your own, as described in https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/_optional_install_a_new_mit_kdc.html . What will Ambari do? Once the KDC server setup is completed as per the above doc and it is running fine, you can simply enable Kerberos for your cluster from the Ambari UI. Ambari will install the Kerberos client packages on all cluster hosts (using yum), set up "/etc/krb5.conf" on all cluster machines where the Kerberos clients need to be set up, and create the keytabs there. https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-security/content/launching_the_kerberos_wizard_automated_setup.html The only thing you need to do manually is install and set up the KDC server, so that you can tell Ambari where your KDC server is running and what the Kadmin principals are; Ambari can then use that to set up Kerberos for your cluster.
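As a rough sketch of the manual KDC steps from that doc for RHEL/CentOS 7 (EXAMPLE.COM is a placeholder realm -- replace it with your own):

```shell
# Run on the KDC host. EXAMPLE.COM is a placeholder realm.
REALM=EXAMPLE.COM

# Install the MIT KDC server packages.
yum install -y krb5-server krb5-libs krb5-workstation

# Create the KDC database (prompts for a master password).
kdb5_util create -s -r "$REALM"

# Create the admin principal that Ambari will use during the Kerberos wizard.
kadmin.local -q "addprinc admin/admin@${REALM}"

# Start the KDC and admin services.
systemctl start krb5kdc kadmin
```

You would also edit /etc/krb5.conf and kadm5.acl for your realm first, as the linked doc describes.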
02-27-2019
02:13 AM
@Michael Bronson I do not think it is possible, because you are talking about two different file systems (HDFS and the local filesystem). If you want to keep syncing your local data directory to an HDFS directory, you can make use of a tool like Apache Flume.
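If a full Flume setup is overkill, a simpler one-way sketch is a periodic script (e.g. from cron) that pushes new local files into HDFS. The paths below are placeholders, and it assumes an hdfs client is configured on the host:

```shell
# Placeholder paths -- adjust to your environment.
SRC=/data/incoming
DEST=/user/example/incoming

hdfs dfs -mkdir -p "$DEST"
for f in "$SRC"/*; do
  name=$(basename "$f")
  # Skip files that were already copied on a previous run.
  hdfs dfs -test -e "$DEST/$name" || hdfs dfs -put "$f" "$DEST/"
done
```

This only copies new files; Flume is the better fit if you need continuous tailing or event-level delivery.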
02-26-2019
01:01 AM
@Sami Ahmad You are adding "hadoop classpath", but you will also need to add "hbase classpath", something like the following: # javac -cp `hadoop classpath`:`hbase classpath`:. TestHbaseTable.java
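A sketch of the full compile-and-run cycle, assuming both the hadoop and hbase CLIs are on the PATH:

```shell
# Build the combined classpath once; "." keeps the current directory on it.
CP="$(hadoop classpath):$(hbase classpath):."

# Compile, then run the class with the same classpath.
javac -cp "$CP" TestHbaseTable.java
java  -cp "$CP" TestHbaseTable
```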
02-21-2019
08:59 PM
@sanyun di I could reproduce this issue. It can happen when you log in to the Ambari UI with a username which contains a DOT. Example: if the username is "test.user", then because it has a DOT in the name you might see this issue. There is an issue reported in JIRA recently and fixed in a later version of Ambari: https://issues.apache.org/jira/browse/AMBARI-25102 However, the fix is easy and you can just update "app.js" as mentioned in the pull request, and your UI will start working fine. Step 1). On the Ambari Server host, take a backup of the file "/usr/lib/ambari-server/web/javascripts/app.js" to some safe directory: # cp -f /usr/lib/ambari-server/web/javascripts/app.js /tmp/app.js Step 2). Refer to the pull request https://github.com/apache/ambari/pull/2764/files and make the changes in the "app.js" file. Step 3). Just do a hard refresh of the browser.
02-20-2019
10:10 PM
@Giridharan C 1. On the problematic host, are you able to start those service components manually, as described in the following doc? https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/administration/content/starting_hdp_services.html 2. Have you restarted the ambari-agent on that host, and after the restart do you see any errors in "/var/log/ambari-agent/ambari-agent.log"? 3. Do you see new command-xxx.json, output-xxx.txt, and errors-xxx.txt files being created on the problematic host in the following location as soon as you trigger the stale config refresh command from the Ambari UI? Just check the newly created files as soon as you trigger the stale config refresh command from the UI:
# ls -lart /var/lib/ambari-agent/data/command*.json
# ls -lart /var/lib/ambari-agent/data/output-*.txt
# ls -lart /var/lib/ambari-agent/data/error*.txt
4. When the stale config update fails on the problematic node, do you see any error in the Ambari UI operational log? Can you please share the complete message that you see in the Ambari UI while the stale config refresh is going on?
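To quickly pull up the newest error output right after triggering the refresh, a small sketch (following the ambari-agent file-naming convention above):

```shell
# Show the most recently written files in the agent data directory...
ls -lart /var/lib/ambari-agent/data/ | tail -5

# ...and dump the newest errors file, if one exists.
latest=$(ls -t /var/lib/ambari-agent/data/errors-*.txt 2>/dev/null | head -1)
[ -n "$latest" ] && tail -50 "$latest"
```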
02-20-2019
12:18 PM
@Ilia K While submitting your Spark job, can you try passing: # spark-submit --master yarn --deploy-mode cluster --conf spark.yarn.maxAppAttempts=1 ... Or try setting "yarn.resourcemanager.am.max-attempts" to 1 (the default may be 2) in Ambari UI --> YARN --> Configs --> Advanced --> Advanced yarn-site
02-20-2019
12:07 PM
2 Kudos
@Ilia K Do you want to control the number of attempts? If yes, then you might be interested in the property "spark.yarn.maxAppAttempts". Example: --conf spark.yarn.maxAppAttempts=1 https://spark.apache.org/docs/latest/running-on-yarn.html From the docs: spark.yarn.maxAppAttempts (defaults to yarn.resourcemanager.am.max-attempts in YARN) -- The maximum number of attempts that will be made to submit the application. It should be no larger than the global number of max attempts in the YARN configuration.
02-20-2019
11:21 AM
@Ilia K Spark can be used to interact with Hive. When you install Spark using Ambari, the hive-site.xml file is automatically populated with the Hive metastore location. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_spark-component-guide/content/spark-config-hive.html
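As a quick sanity check that Spark can actually reach the Hive metastore (a sketch; it assumes spark-shell is on the PATH of a cluster node with that hive-site.xml in place):

```shell
# Pipe a one-line query into spark-shell; it should list the Hive databases.
echo 'spark.sql("SHOW DATABASES").show()' | spark-shell --master yarn
```

If the metastore is wired up correctly, you should see your Hive databases in the output rather than only "default".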