Member since 07-11-2017
2 Posts
2 Kudos Received
0 Solutions
06-08-2018 07:58 PM
1 Kudo
This is a bug in Ambari. You can fix it by patching the upgrade script directly. (Posting my solution here after suffering from this myself.) On your NodeManager hosts, edit /var/lib/ambari-agent/cache/common-services/YARN/your_YARN_version/package/scripts/nodemanager_upgrade.py as follows.

At the top of the file, with the other imports (around line 20), add:

import re

After line 65, add:

hostname_short = re.findall(r'(^\w+)\.', hostname)[0]

Change line 71 to the following:

if hostname in yarn_output or nodemanager_address in yarn_output or hostname_ip in yarn_output or hostname_short in yarn_output:

The upgrade will now also check for the short hostname when you hit "Retry".
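If you want to sanity-check the regex before editing the script, here's a minimal standalone snippet. The helper name is mine, not something in the Ambari script, and note that the regex assumes the agent reports a fully qualified hostname:

import re

def short_hostname(fqdn):
    # Mirrors the patch above: take the first label of an FQDN.
    # Falls back to the input when there's no dot; the patched line
    # itself would raise IndexError in that case.
    labels = re.findall(r'(^\w+)\.', fqdn)
    return labels[0] if labels else fqdn

print(short_hostname("hdp001.cac.queensu.ca"))  # hdp001
print(short_hostname("hdp001"))                 # hdp001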
07-11-2017 04:19 PM
1 Kudo
Spark2 History Server is writing truly massive logfiles to /var/log/spark2 (on the order of 20-30 GB). I'd like to redirect these to /dev/null. How do I change the log location?

For the curious, the content of these logfiles (/var/log/spark2/spark-spark-org.apache.spark.deploy.history.HistoryServer-1-hdp001.cac.queensu.ca.out) is just the following:

17/07/11 11:00:07 ERROR FsHistoryProvider: Exception encountered when attempting to load application log hdfs://<somehostname>:8020/spark2-history/application_1494957845701_0008.inprogress
org.apache.hadoop.security.AccessControlException: Permission denied: user=spark, access=READ, inode="/spark2-history/application_1494957845701_0008.inprogress":zeppelin:hadoop:-rwxrwx---
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1955)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
[snip]
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=spark, access=READ, inode="/spark2-history/application_1494957845701_0008.inprogress":zeppelin:hadoop:-rwxrwx---
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1955)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1913)
[snip]
Although fixing the underlying issue would be nice (the spark user evidently can't read the zeppelin-owned event logs in /spark2-history), right now I'd settle for just changing the log location to /dev/null so it doesn't constantly fill up the root partition on that machine.
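If it helps narrow the answer down, these are the knobs I think are relevant; the names below come from the stock Spark 2 layout and Ambari's config sections, and I haven't verified them on this exact build:

# spark2-env (Ambari: Spark2 > Configs): SPARK_LOG_DIR is where spark-daemon.sh
# writes the HistoryServer .out file. It has to be a real directory, so /dev/null
# itself won't work, but a directory on a bigger partition would spare root.
export SPARK_LOG_DIR=/data/log/spark2   # example path

# spark2 log4j.properties (or wherever your stack exposes it): mute just the
# noisy logger instead of redirecting everything.
log4j.logger.org.apache.spark.deploy.history.FsHistoryProvider=FATAL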
Labels:
- Apache Spark