Member since
05-08-2018
5
Posts
0
Kudos Received
0
Solutions
06-06-2019
12:46 AM
Hi All,
We are facing a critical issue in one of our Cloudera clusters. We are trying to connect and execute HQL files containing ALTER statements using the beeline command (embedded mode) as shown below, and it fails with the following errors.
[srvcacc@hostname ~]$ beeline -u jdbc:hive2://hostname.domain.dom:10000 --verbose=true --showWarnings=true
WARNING: Use "yarn jar" to launch YARN applications.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.
Beeline version 2.1.1-cdh6.1.1 by Apache Hive
Default hs2 conection config file not found
0: jdbc:hive2://hostname.domain.dom:10000> show databases;
No current connection
0: jdbc:hive2://hostname.domain.dom:10000>
We have tried connecting with beeline in the following two ways; one of them fails and the other works:
1.
[srvcacc@hostname ~]$ beeline -u jdbc:hive2://hostname.domain.dom:10000
2.
[srvcacc@hostname ~]$ beeline
WARNING: Use "yarn jar" to launch YARN applications.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.
Beeline version 2.1.1-cdh6.1.1 by Apache Hive
beeline> !connect jdbc:hive2://hostname.domain.dom:10000
Connecting to jdbc:hive2://hostname.domain.dom:10000
Enter username for jdbc:hive2://hostname.domain.dom:10000:
Enter password for jdbc:hive2://hostname.domain.dom:10000:
Connected to: Apache Hive (version 2.1.1-cdh6.1.1)
Driver: Hive JDBC (version 2.1.1-cdh6.1.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hostname.domain.do>
The first way fails at connection time with the following error:
Default hs2 conection config file not found
0: jdbc:hive2://hostname.domain.dom:10000> show databases;
No current connection
The second way connects without any error.
We also tried the connection using the -d parameter to explicitly specify the driver "org.apache.hive.jdbc.HiveDriver", but this gives the same "Default hs2 connection config not found" error. We also attempted the connection with the (deprecated) Hive CLI, and that works without any issue. However, we need beeline to work with the "-u" and "-f" parameters.
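For clarity, this is the kind of invocation we ultimately need working. The HQL file name and its contents below are hypothetical placeholders, and the beeline line is shown as a comment because it only succeeds against a live HiveServer2:

```shell
# Hypothetical ALTER script -- the database/table names are placeholders,
# not objects from our cluster.
cat > /tmp/alter_tables.hql <<'EOF'
ALTER TABLE example_db.example_tbl SET TBLPROPERTIES ('note' = 'updated');
EOF

# The intended (currently failing) invocation -- requires a live HiveServer2:
# beeline -u jdbc:hive2://hostname.domain.dom:10000 -f /tmp/alter_tables.hql

# Sanity check of the script contents:
cat /tmp/alter_tables.hql
```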
Cluster information:
5-node cluster (one master node and 4 data nodes) with CDH version 6.1.1 on RHEL 7.5
HiveServer2, Hive Metastore and WebHCat Server reside on the same server (master node)
The connection to Hive does not use any authentication mechanism.
We have verified the HiveServer2 and Hive Metastore services along with the relevant ports and web UI ports; everything works without any issue.
During the initial setup of this cluster, because security policies mount /tmp with the "noexec" option, we had to change the configurations below to explicitly set "-Djava.io.tmpdir=/var/log/cloudera-scm-server/yarntemp" (where "/var/log/cloudera-scm-server" is a separate mount point with 775 permissions):
YARN configuration
1. ApplicationMaster Java Opts Base
2. Java Configuration Options for JobHistory Server
3. Java Configuration Options for NodeManager
4. Java Configuration Options for ResourceManager
Cloudera Manager --> YARN --> search for: Gateway Client Environment Advanced Configuration Snippet (Safety Valve) for hadoop-env.sh and add this:
HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=/var/log/cloudera-scm-server/yarntemp"
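Since beeline runs as a Hadoop client, it also picks up HADOOP_CLIENT_OPTS from hadoop-env.sh. A quick way to confirm what JVM options the client side would see (mirroring our safety-valve value locally; this is a check sketch, not a fix):

```shell
# Mirror the safety-valve setting locally (assumption: same value as in CM).
export HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=/var/log/cloudera-scm-server/yarntemp"

# Confirm what the client-side JVM would receive:
echo "client opts: $HADOOP_CLIENT_OPTS"
```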
reference: https://community.cloudera.com/t5/Cloudera-Manager-Installation/Problem-starting-a-nodemanager/td-p/27658
Please let us know what needs to be done for beeline to work with the -u and -f parameters. Any help will be appreciated.
Thanks in advance.
Labels:
- Apache Hive
- Apache YARN
- Cloudera Manager
08-09-2018
07:16 AM
Hi All,

As part of our Cloudera BDR backup & restore validation, we use the command below to verify that the backed-up and restored files are the same. Before starting the replication schedule, we ran hdfs dfs -count /data. The /data directory in the source cluster contains 6982 directories and 10,887 files. Please see the result of the hdfs count command:

[user@example ~]$ hdfs dfs -count /data
6982 10887 11897305288 /data

[user@example ~]$ hdfs dfs -ls -R /data | wc -l
17869

We then ran the replication manually (via the distcp command line); due to a space crunch on the remote server, the distcp job failed. We then ran the count command again:

[user@example tmp]$ hdfs dfs -count /data
6982 21756 11940958360 /data

[user@example tmp]$ hdfs dfs -ls -R /data | wc -l
17869

There is a deviation from the file count before the operation: the file count almost doubled. However, the ls -R result still gives the actual count (6982 + 10,887). Ideally, the hdfs dfs -count command should report 10,887 files and 6982 directories. What could be the reason for this inconsistent result? Suspecting some cache, we restarted the cluster, but despite that the counts mentioned above stayed the same.

Thanks in advance,
Kathik
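For reference, `hdfs dfs -count` prints its columns in the order DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME. A minimal sketch of extracting the file-count column, using the two outputs quoted above as hypothetical saved strings rather than a live cluster:

```shell
# Column order of `hdfs dfs -count`: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
# The two lines below are the outputs quoted in the post, saved as strings.
before="6982 10887 11897305288 /data"
after="6982 21756 11940958360 /data"

echo "$before" | awk '{print "files before:", $2}'
echo "$after"  | awk '{print "files after:", $2}'
```

One thing worth checking (an assumption, not confirmed from this cluster): if the replication schedule created HDFS snapshots under /data, the default count includes snapshot contents while `ls -R` on the live tree does not, and `hdfs dfs -count -x /data` excludes snapshots from the count.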
Labels:
- HDFS