Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2452 | 04-27-2020 03:48 AM |
| | 4890 | 04-26-2020 06:18 PM |
| | 3977 | 04-26-2020 06:05 PM |
| | 3222 | 04-13-2020 08:53 PM |
| | 4928 | 03-31-2020 02:10 AM |
02-21-2019
08:59 PM
@sanyun di I could reproduce this issue. It can happen when you log in to the Ambari UI with a username that contains a DOT. Example: if the username is "test.user", then because it has a DOT in the name you might see this issue. The issue was reported to JIRA recently and is fixed in a later version of Ambari: https://issues.apache.org/jira/browse/AMBARI-25102
However, the fix is easy: you can just update "app.js" as mentioned in the pull request and your UI will start working fine.
Step 1). On the Ambari Server host, take a backup of the file "/usr/lib/ambari-server/web/javascripts/app.js" to some safe directory.
# cp -f /usr/lib/ambari-server/web/javascripts/app.js /tmp/app.js
Step 2). Refer to the pull request https://github.com/apache/ambari/pull/2764/files and make the changes in the "app.js" file.
Step 3). Just do a hard refresh of the browser.
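As a quick sanity check after editing (plain coreutils; the /tmp path matches the backup taken in Step 1), you can diff the modified file against the backup to confirm only the intended lines changed:
# diff /tmp/app.js /usr/lib/ambari-server/web/javascripts/app.js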
02-21-2019
12:32 PM
@rinu shrivastav If you are using Ambari then it is even easier: as soon as you create a user in Ambari, it will automatically create the home directory for that user on HDFS and set the permissions accordingly. You can enable this feature via Ambari as described in this doc: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-administration/content/create_user_home_directory.html
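For reference, the same hook can also be enabled directly in /etc/ambari-server/conf/ambari.properties, followed by an Ambari Server restart. The property names below are from memory for Ambari 2.5/2.6, so please verify them against the linked doc:
# grep 'post.user.creation' /etc/ambari-server/conf/ambari.properties
ambari.post.user.creation.hook.enabled=true
ambari.post.user.creation.hook=/var/lib/ambari-server/resources/scripts/post-user-creation-hook.sh
# ambari-server restart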
02-21-2019
12:31 PM
1 Kudo
@rinu shrivastav Example: Suppose you want to create a new user "newuser1" on HDFS.
1. First create the user on your host (client machine) and add it to the "hadoop" group:
# useradd newuser1 -G hadoop
2. Create a home directory for this user on HDFS and set its ownership and permissions:
# su - hdfs -c "hdfs dfs -mkdir /user/newuser1"
# su - hdfs -c "hdfs dfs -chown newuser1:hadoop /user/newuser1"
# su - hdfs -c "hdfs dfs -chmod 755 /user/newuser1"
3. Now you can switch to "newuser1" and work on HDFS, for example putting a file onto HDFS and reading it back:
# su - newuser1
# hdfs dfs -put /etc/passwd /user/newuser1
# hdfs dfs -cat /user/newuser1/passwd
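To verify the setup, list the new home directory and check its ownership and permissions (standard HDFS shell commands):
# hdfs dfs -ls /user | grep newuser1
# hdfs dfs -ls /user/newuser1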
02-20-2019
10:10 PM
@Giridharan C
1. On the problematic host, are you able to start those service components manually, as described in the following doc? https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/administration/content/starting_hdp_services.html
2. Have you restarted the ambari-agent on that host, and after the restart do you see any errors in "/var/log/ambari-agent/ambari-agent.log"?
3. Do you see new command-xxx.json / output-xxx.txt / errors-xxx.txt files being created on the problematic host in the following location as soon as you trigger the stale config refresh command from the Ambari UI? Check the newly created files as soon as you trigger the stale config refresh from the UI.
# ls -lart /var/lib/ambari-agent/data/command*.json
# ls -lart /var/lib/ambari-agent/data/output-*.txt
# ls -lart /var/lib/ambari-agent/data/error*.txt
4. When the stale config update fails on the problematic node, do you see any error in the Ambari UI operational log? Can you please share the complete message that you see in the Ambari UI while the stale config refresh is going on?
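If it helps, a quick way to restart the agent and watch its log for problems at the same time (standard ambari-agent and coreutils commands):
# ambari-agent restart
# tail -f /var/log/ambari-agent/ambari-agent.log | grep -iE 'error|warn'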
02-20-2019
12:18 PM
@Ilia K While submitting your Spark job, can you try passing the following?
# spark-submit --master yarn --deploy-mode cluster --conf spark.yarn.maxAppAttempts=1 .............
Or try setting "yarn.resourcemanager.am.max-attempts" to 1 (the default may be 2) in Ambari UI --> YARN --> Configs --> Advanced --> Advanced yarn-site.
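For illustration only, a complete invocation might look like the following; the class name and jar path are placeholders, not from the original post:
# spark-submit --master yarn --deploy-mode cluster --conf spark.yarn.maxAppAttempts=1 --class com.example.MyApp /path/to/my-app.jar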
02-20-2019
12:07 PM
2 Kudos
@Ilia K Do you want to control the number of attempts? If yes, then you might be interested in the following property: "spark.yarn.maxAppAttempts". Example: --conf spark.yarn.maxAppAttempts=1
From https://spark.apache.org/docs/latest/running-on-yarn.html :
spark.yarn.maxAppAttempts (default: yarn.resourcemanager.am.max-attempts in YARN) - The maximum number of attempts that will be made to submit the application. It should be no larger than the global number of max attempts in the YARN configuration.
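To confirm the cluster-wide ceiling on the YARN side, you can grep yarn-site.xml on a cluster node (assuming the usual /etc/hadoop/conf location on HDP; your config directory may differ):
# grep -A1 'yarn.resourcemanager.am.max-attempts' /etc/hadoop/conf/yarn-site.xml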
02-20-2019
11:21 AM
@Ilia K Spark can be used to interact with Hive. When you install Spark using Ambari, the hive-site.xml file is automatically populated with the Hive metastore location. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_spark-component-guide/content/spark-config-hive.html
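As a quick sanity check of the integration, you can confirm that Spark picked up the Hive metastore config and can list Hive databases (the /etc/spark2/conf path is the typical HDP location and may differ in your setup):
# ls -l /etc/spark2/conf/hive-site.xml
# spark-sql -e "SHOW DATABASES"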
02-20-2019
11:09 AM
1 Kudo
@Ilia K A bit of a correction here: I just checked, and Spark2 has a dependency on the Hive service.
# grep -A1 -B1 'HIVE' /var/lib/ambari-server/resources/common-services/SPARK2/2.0.0/metainfo.xml
<dependency>
<name>HIVE/HIVE_METASTORE</name>
<scope>cluster</scope>
Hence you should not delete the Hive service. But if you are not using Hive, just stop the "Hive Service" components (so that they do not use RAM) and put the Hive service in Maintenance Mode.
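If you prefer the Ambari REST API over the UI for the Maintenance Mode step, something like the following should work (the admin credentials, ambari host, and CLUSTER_NAME are placeholders for your environment):
# curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Turn ON Maintenance Mode for Hive"},"Body":{"ServiceInfo":{"maintenance_state":"ON"}}}' http://<ambari-host>:8080/api/v1/clusters/CLUSTER_NAME/services/HIVE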
02-20-2019
10:59 AM
@Ilia K It indicates that your cluster might not have enough resources, or you might be running some unwanted services on your cluster. Either increase the resources on your cluster nodes (like RAM), or remove unwanted services from the cluster so that the containers can start a bit faster.
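A quick way to check whether the nodes are actually memory-starved (standard Linux and YARN CLI commands; "yarn top" is available on Hadoop 2.7 and later):
# free -m
# yarn top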
02-20-2019
10:54 AM
@Ilia K If you are not using the Hive service (Hive Metastore / HiveServer2 / Hive Client) then you can remove it. The same goes for the AMS Collector, which internally starts an HBase instance to store cluster and service metrics data. So if you are not much interested in the metrics data, you can also delete the AMS service (AMS Collector / Grafana / Metrics Monitors).
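For reference, stopping and then deleting a service via the Ambari REST API usually looks like the following (the service must be in the stopped/INSTALLED state before the DELETE; credentials, host, and CLUSTER_NAME are placeholders):
# curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"ServiceInfo":{"state":"INSTALLED"}}' http://<ambari-host>:8080/api/v1/clusters/CLUSTER_NAME/services/AMBARI_METRICS
# curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE http://<ambari-host>:8080/api/v1/clusters/CLUSTER_NAME/services/AMBARI_METRICS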