Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1917 | 06-15-2020 05:23 AM |
| | 15459 | 01-30-2020 08:04 PM |
| | 2071 | 07-07-2019 09:06 PM |
| | 8104 | 01-27-2018 10:17 PM |
| | 4569 | 12-31-2017 10:12 PM |
11-01-2017
04:39 AM
OK, I ran it, and now we get:
WARN AbstractLifeCycle: FAILED ServerConnector@14a54ef6{HTTP/1.1}{0.0.0.0:18081}: java.net.BindException: Address already in use
java.net.BindException: Address already in use
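The BindException usually means another process (often a still-running history server instance) is already bound to port 18081. A minimal check-and-restart sketch, assuming netstat/lsof are available and the HDP 2.6.0.3-8 install path used above:

```bash
# Find which process already holds port 18081 (the history server UI port in the log above).
netstat -tlnp | grep 18081        # alternatively: lsof -i :18081
# Stop the stale instance, then start the history server again.
/usr/hdp/2.6.0.3-8/spark2/sbin/stop-history-server.sh
/usr/hdp/2.6.0.3-8/spark2/sbin/start-history-server.sh
```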
11-01-2017
04:18 AM
We start the Spark history server as follows:
/usr/hdp/2.6.0.3-8/spark2/sbin/start-history-server.sh
In the log spark-root-org.apache.spark.deploy.history.HistoryServer-1-master01 we get many lines like this:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=READ, inode="/spark2-history/application_1497173286109_0003_2.inprogress":hdfs:hadoop:-rwxrwx---
We tried
hdfs dfs -chown spark /spark2-history
but it did not help. We are also not sure whether we should be able to see the folder locally:
# ls /spark2-history/
ls: cannot access /spark2-history/: No such file or directory
Please advise what the solution is so that the Spark history server can start.
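A possible fix, sketched under the assumption that /spark2-history is the HDFS event-log directory (which is why the local ls fails) and that the service account is spark in group hadoop, as the inode owner/group above suggests: the chown has to recurse into the existing log files, and the server should run as spark rather than root.

```bash
# Recursively hand the existing event logs to the spark service account
# (chown on HDFS requires the hdfs superuser).
sudo -u hdfs hdfs dfs -chown -R spark:hadoop /spark2-history
# Start the history server as spark instead of root so it can read the 770-mode logs.
sudo -u spark /usr/hdp/2.6.0.3-8/spark2/sbin/start-history-server.sh
```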
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Spark
10-31-2017
08:00 PM
The Spark history server does not start, and in the logs under /var/log/spark2 we see the following:
17/10/31 21:00:23 ERROR FsHistoryProvider: Exception encountered when attempting to load application log hdfs://hdfsha/spark2-history/application_1507958402099_0822_1
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=READ, inode="/spark2-history/application_1507958402099_0822_1":bipflop:hadoop:-rwxrwx---
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1955)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1913)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2000)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1969)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1882)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:699)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:376)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
Please advise how to resolve this. Do you think we need to restore PostgreSQL from backup in order to solve this?
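A hedged note: this exception is an HDFS permission check on the event-log directory, not an Ambari database problem, so a PostgreSQL restore is unlikely to help. The log file is owned by bipflop:hadoop with mode 770, which a history server started as root cannot read. A minimal verification sketch, assuming the spark service account belongs to the hadoop group:

```bash
# Inspect ownership and permissions of the event logs the history server must read.
hdfs dfs -ls /spark2-history | head
# Confirm the spark account is in the hadoop group, then start the server as spark, not root.
id spark
sudo -u spark /usr/hdp/2.6.0.3-8/spark2/sbin/start-history-server.sh
```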
Labels:
10-30-2017
11:57 AM
Yes, the parameter is log4j.appender.DRFA.MaxBackupIndex=30, and we already restarted the Hive service, but the files under /var/log/hive are still not deleted. What other checks do we need to do here, and how often is that process supposed to delete the files?
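One possible explanation and workaround, sketched under the assumption that the stock log4j 1.x DRFA (DailyRollingFileAppender) is in use: MaxBackupIndex is only honoured by RollingFileAppender, so the date-suffixed daily files are never pruned by log4j itself, and an external cleanup (for example from cron) is a common way to cap them:

```bash
# Delete daily Hive logs older than 30 days; adjust the age to match the intended retention.
find /var/log/hive -name 'hiveserver2.log.*' -mtime +30 -delete
find /var/log/hive -name 'hivemetastore.log.*' -mtime +30 -delete
```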
10-30-2017
11:37 AM
We have an Ambari cluster with the Hive service. The Ambari configuration is supposed to delete the files under /var/log/hive once more than 30 files with the same name exist (from the Hive config hive_log_maxbackupindex=30). But when we access the master machines we see more than 60 files with the same name, for example:
cd /var/log/hive
ls -ltr | grep hivemetastore | wc -l
61
ls -ltr | grep hiveserver2 | wc -l
61
We also uncommented the line log4j.appender.DRFA.MaxBackupIndex and restarted the Hive service, but this did not help. Please advise what the problem could be. Example of the files under /var/log/hive:
-rw-r--r--. 1 hive hadoop 2752 Sep 2 19:05 hivemetastore-report.json
-rw-r--r--. 1 hive hadoop 2756 Sep 2 19:05 hiveserver2-report.json
-rw-r--r--. 1 hive hadoop 636678 Sep 2 23:58 hiveserver2.log.2017-09-02
-rw-r--r--. 1 hive hadoop 1127874 Sep 2 23:59 hivemetastore.log.2017-09-02
-rw-r--r--. 1 hive hadoop 2369407 Sep 3 23:58 hiveserver2.log.2017-09-03
. . . .
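A quick check worth doing, assuming the default HDP config location: confirm which appender class actually writes hiveserver2.log and hivemetastore.log, because MaxBackupIndex only takes effect for RollingFileAppender and is silently ignored by the date-based DailyRollingFileAppender.

```bash
# Show the appender class, file pattern, and MaxBackupIndex actually in effect.
grep -n 'log4j.appender.DRFA' /etc/hive/conf/hive-log4j.properties
```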
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
10-26-2017
05:53 PM
I am a little confused: the value of yarn.nodemanager.resource.memory-mb is more than 120G, while each worker machine has only 32G. How can yarn.nodemanager.resource.memory-mb be set to much more than the memory size of the worker machine?
10-26-2017
04:18 PM
We have an Ambari cluster with 5 worker machines (each worker has 32G of memory). In the dashboard we see the YARN memory shown as: 77% used, 575G of 746G. It seems the running applications take 575G, but how can that be 575G if the workers have 32 x 5 = 160G in total? And how is the 746G total calculated?
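A hedged explanation: the dashboard total is the sum of yarn.nodemanager.resource.memory-mb over all NodeManagers (746G across 5 workers is roughly 149G each, if they are configured identically), and YARN does not check that value against the physical RAM of the host, so an over-committed setting produces exactly this kind of inflated total. A minimal sketch to compare the configured value with the real memory on a worker, assuming the default HDP config path:

```bash
# Configured NodeManager memory (in MB) versus the physical RAM actually installed.
grep -A1 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml
free -g   # expected to show ~32G on these workers
```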
Labels:
- Apache Ambari
- Apache Hadoop
- Apache YARN
10-26-2017
01:49 PM
Jay, can you please help with this case: https://community.hortonworks.com/questions/142356/how-to-recover-the-standby-name-node-in-ambari-clu.html
10-26-2017
01:46 PM
Sorry, you are right.