Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1970 | 06-15-2020 05:23 AM |
| | 16069 | 01-30-2020 08:04 PM |
| | 2108 | 07-07-2019 09:06 PM |
| | 8249 | 01-27-2018 10:17 PM |
| | 4674 | 12-31-2017 10:12 PM |
11-05-2017
08:21 AM
For some unclear reason, when we start the Ambari agent on the master machine it fails. In the log we can see:

```
ERROR 2017-10-02 11:58:42,597 script_alert.py:123 - [Alert][hive_server_process] Failed with result CRITICAL: ['Connection failed on host machine-master01.pop.com:10000 (Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/alerts/alert_hive_thrift_port.py", line 211, in execute
    ldap_password=ldap_password)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/hive_check.py", line 79, in check_thrift_port_sasl
    timeout=check_command_timeout)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
```

What causes this problem?
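The alert itself essentially checks that HiveServer2's Thrift port answers. A minimal Python sketch of that kind of reachability check (this is not Ambari's actual alert code; the host and port below come from the error message):

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    Reproduces the gist of the Ambari alert: is HiveServer2's
    Thrift port (10000 by default) reachable at all?
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example, using the host from the alert above (adjust for your cluster):
# port_is_open("machine-master01.pop.com", 10000)
```

If this returns False from the Ambari server host, the alert is simply reporting that HiveServer2 is down or unreachable, and the HiveServer2 logs on that host are the place to look next.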
Labels:
- Apache Ambari
- Apache Hadoop
11-01-2017
10:40 AM
We get many errors like `EmulatedXAResource@64deb58f, error code TMNOFLAGS and transaction: [DataNucleus Transaction, ID=Xid=#, enlisted resources=[]]`:

```
2017-11-01 10:35:20,363 DEBUG [main]: DataNucleus.Transaction (Log4JLogger.java:debug(58)) - Running enlist operation on resource: org.datanucleus.store.rdbms.ConnectionFactoryImpl$EmulatedXAResource@64deb58f, error code TMNOFLAGS and transaction: [DataNucleus Transaction, ID=Xid=#, enlisted resources=[]]
javax.jdo.JDODataStoreException: Error executing SQL query "select "DB_ID" from "DBS"".
java.sql.SQLSyntaxErrorException: Table/View 'DBS' does not exist.
Caused by: ERROR 42X05: Table/View 'DBS' does not exist.
java.lang.RuntimeException: Error applying authorization policy on hive configuration: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
```
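A hedged sketch, assuming an HDP layout: Derby error `42X05` on the `DBS` table usually means Hive is connecting to a metastore database whose schema was never initialized (often an empty embedded Derby database, because hive-site.xml does not point at the intended database). Hive ships a `schematool` that can verify and, if needed, create the schema; `-dbType` must match the real metastore database, and the binary path below is an assumption:

```shell
# Check which database the metastore connects to and whether the schema exists:
/usr/hdp/current/hive-metastore/bin/schematool -dbType derby -info

# If the schema really is missing, initialize it
# (use -dbType postgres / mysql if that is the configured metastore DB):
/usr/hdp/current/hive-metastore/bin/schematool -dbType derby -initSchema
```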
11-01-2017
09:49 AM
I ran the command, but it was stuck for a long time. Regarding hive-server.pid, we have only one file:

```
ls /var/run/hive/
hive.pid
```

By the way, the Hive server is running on the master03 machine, but not on master01.
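A hedged sketch for checking what is actually running on master01 (generic commands; the pid-file path is the one from the post):

```shell
# Is a HiveServer2 process running at all?
ps -ef | grep -i '[h]iveserver2'

# Which pid was recorded, and does that process still exist?
cat /var/run/hive/hive.pid
ps -p "$(cat /var/run/hive/hive.pid)"
```

If the pid file exists but the process does not, a stale pid file can also confuse start scripts.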
11-01-2017
09:06 AM
We try to start the Hive server on the master01 machine as follows:

```
[root@master01 hive]# su hive
[hive@master01 hive]$ /usr/bin/hive --service hiveserver2
```

But the hive command does not print anything and the Hive server is still stopped. What are the options to check why Hive does not start?
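A hedged sketch of how to make the failure visible, assuming a standard HDP layout (the `hive.root.logger` override is a standard Hive option; the log paths may differ on your cluster):

```shell
# Run HiveServer2 in the foreground with logging sent to the console,
# so startup errors are printed instead of disappearing into log files:
su - hive
/usr/bin/hive --service hiveserver2 --hiveconf hive.root.logger=INFO,console

# Also check the logs HiveServer2 writes by default:
tail -n 100 /var/log/hive/hiveserver2.log
tail -n 100 /tmp/hive/hive.log
```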
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
11-01-2017
04:39 AM
OK, I ran it, and now we get:

```
WARN AbstractLifeCycle: FAILED ServerConnector@14a54ef6{HTTP/1.1}{0.0.0.0:18081}: java.net.BindException: Address already in use
java.net.BindException: Address already in use
```
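`java.net.BindException: Address already in use` just means another process is already listening on port 18081 (possibly a leftover history server instance). On the server, something like `netstat -tlnp | grep 18081` or `lsof -i :18081` should show which process holds the port. The error itself is easy to reproduce; a minimal Python sketch (not Spark code, just the same OS-level failure):

```python
import socket

# Bind a port once, then try to bind it again: the second bind fails with
# the OS-level "Address already in use" error (EADDRINUSE), which Java
# surfaces as java.net.BindException and Python as OSError.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))  # let the OS pick a free port
port = first.getsockname()[1]
first.listen(1)

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    conflict = False
except OSError:
    conflict = True
finally:
    second.close()
    first.close()

print(conflict)  # True
```

Once the process holding 18081 is identified, either stop it or move the history server UI to another port.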
11-01-2017
04:18 AM
We start the Spark history server as follows:

```
/usr/hdp/2.6.0.3-8/spark2/sbin/start-history-server.sh
```

In the log `spark-root-org.apache.spark.deploy.history.HistoryServer-1-master01` we get many lines like:

```
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=READ, inode="/spark2-history/application_1497173286109_0003_2.inprogress":hdfs:hadoop:-rwxrwx---
```

We tried `hdfs dfs -chown spark /spark2-history`, but it did not help. We are also not sure whether we should be able to see the folder:

```
# ls /spark2-history/
ls: cannot access /spark2-history/: No such file or directory
```

Please advise what the solution is in order to start the Spark history server.
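A hedged sketch of two ways past the AccessControlException, assuming the stock HDP layout from the post (note that the earlier `chown` was not recursive, so files created before it kept their old owner):

```shell
# Option 1: run the history server as the spark user instead of root:
sudo -u spark /usr/hdp/2.6.0.3-8/spark2/sbin/start-history-server.sh

# Option 2: fix ownership recursively on the HDFS directory (run as the
# HDFS superuser; -R is what the earlier chown was missing):
sudo -u hdfs hdfs dfs -chown -R spark:hadoop /spark2-history

# /spark2-history lives in HDFS, not on the local disk, which is why a
# local `ls /spark2-history` reports "No such file or directory". List it with:
hdfs dfs -ls /spark2-history
```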
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Spark
10-31-2017
08:00 PM
The Spark history server does not start, and in the logs under /var/log/spark2 we see the following:

```
17/10/31 21:00:23 ERROR FsHistoryProvider: Exception encountered when attempting to load application log hdfs://hdfsha/spark2-history/application_1507958402099_0822_1
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=READ, inode="/spark2-history/application_1507958402099_0822_1":bipflop:hadoop:-rwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1955)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1913)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2000)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1969)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1882)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:699)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:376)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
```

Please advise how to resolve this. Do you think we need to restore PostgreSQL from a backup in order to solve this?
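For what it's worth, the exception above is an HDFS `AccessControlException` (the history server, running as root, cannot read event-log files owned by `bipflop:hadoop` with mode `-rwxrwx---`), not a database error, so a PostgreSQL restore should not be needed. A hedged diagnostic sketch, using only generic commands:

```shell
# Compare the owner/group of the event-log files in HDFS ...
hdfs dfs -ls /spark2-history | head

# ... with the user the history server process actually runs as:
ps -ef | grep -i '[h]istoryserver'

# To read those -rwxrwx--- files, the history server user must be the
# files' owner or a member of their group (hadoop, here).
```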
10-30-2017
11:57 AM
Yes, the parameter is `log4j.appender.DRFA.MaxBackupIndex=30`, and we already restarted the Hive service, but the files under /var/log/hive are still not deleted. What other checks do we need to do here? And how often does the process that deletes the files run?
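One hedged observation: in log4j 1.x, `MaxBackupIndex` is honored by `RollingFileAppender` but silently ignored by `DailyRollingFileAppender` (which is what the `DRFA` appender normally is), so setting it has no effect on the dated files, and there is no separate periodic cleanup process. A common workaround is an external cleanup job, e.g. from cron:

```shell
# Delete dated Hive logs older than 30 days (the cutoff mirrors the
# hive_log_maxbackupindex=30 intent; paths are from the post):
find /var/log/hive -name 'hiveserver2.log.*' -mtime +30 -delete
find /var/log/hive -name 'hivemetastore.log.*' -mtime +30 -delete
```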
10-30-2017
11:37 AM
We have an Ambari cluster with the Hive service. The Ambari configuration is supposed to delete the files under /var/log/hive when there are more than 30 files with the same name (from the Hive config `hive_log_maxbackupindex=30`). But when we access the master machines, we see more than 60 files with the same name. Example:

```
cd /var/log/hive
ls -ltr | grep hivemetastore | wc -l
61
ls -ltr | grep hiveserver2 | wc -l
61
```

We also uncommented the line `log4j.appender.DRFA.MaxBackupIndex` and restarted the Hive service, but this did not help. Please advise what the problem could be. Example of files under /var/log/hive:

```
-rw-r--r--. 1 hive hadoop    2752 Sep  2 19:05 hivemetastore-report.json
-rw-r--r--. 1 hive hadoop    2756 Sep  2 19:05 hiveserver2-report.json
-rw-r--r--. 1 hive hadoop  636678 Sep  2 23:58 hiveserver2.log.2017-09-02
-rw-r--r--. 1 hive hadoop 1127874 Sep  2 23:59 hivemetastore.log.2017-09-02
-rw-r--r--. 1 hive hadoop 2369407 Sep  3 23:58 hiveserver2.log.2017-09-03
...
```
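In log4j 1.x, `MaxBackupIndex` only applies to `RollingFileAppender`; `DailyRollingFileAppender` (the usual `DRFA`) ignores it, which matches the behavior described. If relying on `MaxBackupIndex` is the goal, a hedged sketch of a hive-log4j fragment that switches to size-based rolling (property names are standard log4j 1.x; whether Ambari preserves a manual edit of this file is a separate question):

```properties
# RollingFileAppender honors MaxBackupIndex; DailyRollingFileAppender does not.
log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.File=${hive.log.dir}/${hive.log.file}
log4j.appender.DRFA.MaxFileSize=256MB
log4j.appender.DRFA.MaxBackupIndex=30
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t]: %c{2} (%F:%M(%L)) - %m%n
```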
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
10-26-2017
05:53 PM
I am a little confused about the value: yarn.nodemanager.resource.memory-mb is more than 120 GB, but each worker machine has only 32 GB. How can yarn.nodemanager.resource.memory-mb be much larger than the memory size of the worker machine?
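A note plus a hedged sketch: YARN does not validate `yarn.nodemanager.resource.memory-mb` against physical RAM. It is simply the amount of memory each NodeManager *offers* to the scheduler, so a value above the machine's 32 GB is accepted as-is and leads to overcommit and swapping once containers actually use it. The rough sizing heuristic below (physical RAM minus an OS/daemon reservation) follows old Hortonworks-style guidance; the exact reservation numbers are assumptions, not an official API:

```python
def recommended_nm_memory_mb(physical_gb):
    """Suggest a yarn.nodemanager.resource.memory-mb value for one worker.

    Reserve some RAM for the OS and other daemons, scaling the reservation
    with node size, and offer the rest to YARN (in MB).
    """
    if physical_gb <= 8:
        reserved_gb = 2
    elif physical_gb <= 24:
        reserved_gb = 4
    elif physical_gb <= 72:
        reserved_gb = 8
    else:
        reserved_gb = 16
    return (physical_gb - reserved_gb) * 1024

# For the 32 GB workers from the question:
print(recommended_nm_memory_mb(32))  # 24576
```

By this kind of heuristic, a 32 GB worker would be configured with roughly 24 GB for YARN, so a 120 GB setting on such nodes is almost certainly a misconfiguration, not something YARN derived from the hardware.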