Member since: 08-08-2017
1652 Posts | 30 Kudos Received | 11 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1914 | 06-15-2020 05:23 AM |
| | 15434 | 01-30-2020 08:04 PM |
| | 2055 | 07-07-2019 09:06 PM |
| | 8094 | 01-27-2018 10:17 PM |
| | 4561 | 12-31-2017 10:12 PM |
11-05-2017
10:54 AM
@uri ben-ari Yes, we can kill it if no other users are using HiveServer2 (just to be sure they are not running any job):

# cat /var/run/hive/hive-server.pid
# ps -ef | grep `cat /var/run/hive/hive-server.pid`
# netstat -tnlpa | grep `cat /var/run/hive/hive-server.pid`
# kill -9 `cat /var/run/hive/hive-server.pid`

The cat, ps, and netstat commands above are there to confirm that we are killing the correct process.
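The checks above can be wrapped into a small guard so the kill only fires when the PID file points at a live process. This is a sketch: the function name `kill_hs2` is illustrative, not a Hive-provided tool, and the pid-file path is the assumed default from the post.

```shell
# kill_hs2: kill the process recorded in a pid file, but only after
# confirming the file exists and the process is actually running.
# Function name is illustrative, not part of any Hive tooling.
kill_hs2() {
    pidfile=$1
    [ -f "$pidfile" ] || { echo "no pid file: $pidfile"; return 1; }
    pid=$(cat "$pidfile")
    ps -p "$pid" >/dev/null 2>&1 || { echo "process $pid not running"; return 1; }
    kill -9 "$pid"
}

# Example (assumed default HiveServer2 pid-file location from the post):
# kill_hs2 /var/run/hive/hive-server.pid
```

The pre-checks keep `kill -9` from being pointed at a stale PID that the OS may have reused for an unrelated process.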
11-01-2017
10:40 AM
We get many errors like the following:
2017-11-01 10:35:20,363 DEBUG [main]: DataNucleus.Transaction (Log4JLogger.java:debug(58)) - Running enlist operation on resource: org.datanucleus.store.rdbms.ConnectionFactoryImpl$EmulatedXAResource@64deb58f, error code TMNOFLAGS and transaction: [DataNucleus Transaction, ID=Xid=#, enlisted resources=[]]
javax.jdo.JDODataStoreException: Error executing SQL query "select "DB_ID" from "DBS"".
java.sql.SQLSyntaxErrorException: Table/View 'DBS' does not exist.
Caused by: ERROR 42X05: Table/View 'DBS' does not exist.
java.lang.RuntimeException: Error applying authorization policy on hive configuration: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
12-03-2017
05:05 AM
@Michael Bronson, this looks like a permission issue. The /spark2-history directory should belong to the spark user. You can change it as below:

hdfs dfs -chown spark /spark2-history
hdfs dfs -chown spark /spark-history

Thanks, Aditya
10-30-2017
11:57 AM
Yes, the parameter is log4j.appender.DRFA.MaxBackupIndex=30, and we already restarted the Hive service, but the files under /var/log/hive are still not deleted. What other checks do we need to do here? And how often does the process that deletes the files run?
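For reference, the relevant appender block in hive-log4j.properties typically looks like the fragment below (paths and values are illustrative). One thing worth checking: stock log4j 1.x DailyRollingFileAppender does not honor MaxBackupIndex — only appenders that implement the setting (e.g. RollingFileAppender, or a patched DRFA shipped by some distributions) prune old files — which may be why the rolled files under /var/log/hive are never deleted.

```properties
# Illustrative hive-log4j.properties fragment (values are examples)
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hive.log.dir}/${hive.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
# Honored only by appenders that implement it; stock DRFA ignores this setting
log4j.appender.DRFA.MaxBackupIndex=30
```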
10-26-2017
06:11 PM
@uri ben-ari, This value is calculated by the stack advisor: 'yarn.nodemanager.resource.memory-mb' = int(round(min(clusterData['containers'] * clusterData['ramPerContainer'], nodemanagerMinRam)))
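The formula can be sanity-checked by hand. A minimal sketch in shell arithmetic, with illustrative numbers (in Ambari the real inputs come from the stack advisor's clusterData):

```shell
# Illustrative inputs; in Ambari these come from clusterData in the stack advisor.
containers=8            # clusterData['containers']
ram_per_container=2048  # clusterData['ramPerContainer'], in MB
nodemanager_min_ram=20480

# yarn.nodemanager.resource.memory-mb =
#   min(containers * ramPerContainer, nodemanagerMinRam)
product=$(( containers * ram_per_container ))
if [ "$product" -lt "$nodemanager_min_ram" ]; then
    memory_mb=$product
else
    memory_mb=$nodemanager_min_ram
fi
echo "yarn.nodemanager.resource.memory-mb = $memory_mb"
```

With these example numbers, 8 containers x 2048 MB = 16384 MB, which is below the 20480 MB cap, so the smaller value wins.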
03-29-2018
03:02 AM
Cheers @Jay Kumar SenSharma, that solved my other issue.
10-25-2017
07:31 PM
I believe you need to figure out why multiple Spark apps are running. If this is not a production cluster and no one will be affected by restarting Spark, you can look into that option. But this makes me believe that the configuration controlling how many Spark apps are supposed to run is most probably what differs between your two clusters. I am not enough of a Spark expert to point you to the exact config to look for.
10-25-2017
09:58 AM
Jay, not in this issue, but I will be happy to get your answer to my question at https://community.hortonworks.com/questions/142356/how-to-recover-the-standby-name-node-in-ambari-clu.html
10-25-2017
07:43 PM
@uri ben-ari You can use Ambari API to delete services from the host, then delete the host : https://cwiki.apache.org/confluence/display/AMBARI/Using+APIs+to+delete+a+service+or+all+host+components+on+a+host
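A sketch of the REST calls described on that wiki page, with placeholder server, cluster, host, and component names (replace them with your own). Components on the host must be deleted before the host itself can be removed; the commands are echoed here rather than executed.

```shell
# Placeholder values -- replace with your Ambari server, cluster, and host.
AMBARI="http://ambari.example.com:8080/api/v1"
CLUSTER="mycluster"
HOST="worker01.example.com"

# 1. Delete each component on the host (DATANODE shown as an example),
# 2. then delete the host itself.
# Ambari requires the X-Requested-By header on modifying requests.
# Remove the leading 'echo' to actually run the calls.
echo curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE \
    "$AMBARI/clusters/$CLUSTER/hosts/$HOST/host_components/DATANODE"
echo curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE \
    "$AMBARI/clusters/$CLUSTER/hosts/$HOST"
```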
10-23-2017
05:30 PM
@uri ben-ari This looks like a mount issue, or for some reason the data dir is not readable/accessible. Can you please check from the OS side whether there is any disk/mount issue? Please check for bad mounts; a system admin can help better with mount/disk related issues. Looking at "/var/log/messages" may give some idea. Also, can you please check the "dfs.datanode.data.dir" directory to find out if there is any issue? Please share the value of this property and also check its permissions.
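The checks suggested above can be scripted roughly as below. The helper name `check_dir` and the example path are illustrative; the real path(s) should be taken from the value of dfs.datanode.data.dir.

```shell
# check_dir: verify a directory exists and is readable/traversable.
# Helper name is illustrative; point it at each dfs.datanode.data.dir entry.
check_dir() {
    dir=$1
    [ -d "$dir" ] || { echo "$dir: missing (bad mount?)"; return 1; }
    [ -r "$dir" ] && [ -x "$dir" ] || { echo "$dir: not readable/accessible"; return 1; }
    echo "$dir: ok"
}

# Manual checks from the post (example path only):
# df -h /grid/hadoop/hdfs/data    -> is the mount present and not full?
# ls -ld /grid/hadoop/hdfs/data   -> owner is typically hdfs:hadoop
# grep -i 'sd[a-z]' /var/log/messages   -> any disk errors logged by the OS?
# check_dir /grid/hadoop/hdfs/data
```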