Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
Title | Views | Posted
---|---|---
| 11375 | 03-08-2019 06:33 PM
| 4847 | 02-15-2019 08:47 PM
| 4146 | 09-26-2018 06:02 PM
| 10526 | 09-07-2018 10:33 PM
| 5580 | 04-25-2018 01:55 AM
04-08-2016
08:13 AM
@grajagopal - If you got your answer, please click "Accept" on the answer given above to close this thread 🙂
04-08-2016
08:11 AM
10 Kudos
Generally, you will get a permission-denied error like the one below when you run an Oozie shell action without setting an environment variable to the user who submitted the Oozie workflow:

Permission denied: user=yarn, access=WRITE

Below are my sample configs.

workflow.xml

<workflow-app xmlns="uri:oozie:workflow:0.3" name="shell-wf">
<start to="shell-node"/>
<action name="shell-node">
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>test.sh</exec>
<file>/user/root/test.sh</file>
</shell>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>

job.properties

nameNode=hdfs://sandbox.hortonworks.com:8020
jobTracker=sandbox.hortonworks.com:8050
queueName=default
examplesRoot=examples
oozie.wf.application.path=${nameNode}/user/${user.name}

test.sh shell script:

#!/bin/bash
hadoop fs -mkdir /user/root/testdir

Error:

mkdir: Permission denied: user=yarn, access=WRITE, inode="/user/root/testdir":root:hdfs:drwxr-xr-x
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.impl.MetricsSystemImpl).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
How to fix this?

To tell Oozie not to run the container as the yarn user but as the user who submitted the workflow, we need to add the environment variable below in our workflow.xml file:

<env-var>HADOOP_USER_NAME=${wf:user()}</env-var>

Modified working workflow.xml file:

<workflow-app xmlns="uri:oozie:workflow:0.3" name="shell-wf">
<start to="shell-node"/>
<action name="shell-node">
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>test.sh</exec>
<env-var>HADOOP_USER_NAME=${wf:user()}</env-var>
<file>/user/root/test.sh</file>
</shell>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>

Output:

[root@sandbox ~]# hadoop fs -ls -d /user/root/testdir
drwxr-xr-x - root hdfs 0 2016-04-08 07:59 /user/root/testdir
[root@sandbox ~]#
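Not part of the original post, but for reference, this is roughly how the workflow would be uploaded and submitted from the sandbox; the Oozie server URL below assumes the default port 11000 and the paths follow the job.properties above:

# Upload the workflow definition and script to the application path in HDFS
hadoop fs -put -f workflow.xml test.sh /user/root/

# Submit and start the workflow (Oozie URL is an assumption: default port 11000)
oozie job -oozie http://sandbox.hortonworks.com:11000/oozie -config job.properties -run

# Check the status using the job ID returned by the previous command
oozie job -oozie http://sandbox.hortonworks.com:11000/oozie -info <job-id>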
04-07-2016
08:50 PM
9 Kudos
As Hadoop Admins, it's our responsibility to perform Hadoop cluster maintenance frequently. Let's see what we can do to keep our big elephant happy! 😉

1. FileSystem Checks

We should check the health of HDFS periodically by running the fsck command:

sudo -u hdfs hadoop fsck /

This command contacts the Namenode and recursively checks each file under the provided path. Below is sample output of the fsck command:

sudo -u hdfs hadoop fsck /
FSCK started by hdfs (auth:SIMPLE) from /10.0.2.15 for path / at Wed Apr 06 18:47:37 UTC 2016
Total size: 1842803118 B
Total dirs: 4612
Total files: 11123
Total symlinks: 0 (Files currently being written: 4)
Total blocks (validated): 11109 (avg. block size 165883 B) (Total open file blocks (not validated): 1)
Minimally replicated blocks: 11109 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 11109 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 1.0
Corrupt blocks: 0
Missing replicas: 22232 (66.680664 %)
Number of data-nodes: 1
Number of racks: 1
FSCK ended at Wed Apr 06 18:46:54 UTC 2016 in 1126 milliseconds
The filesystem under path '/' is HEALTHY

We can schedule a weekly cron job on an edge node which runs fsck and emails the output to the Hadoop Admin (a sample maintenance script is sketched at the end of this article).

2. HDFS Balancer utility

Over time, data becomes unbalanced across the Datanodes in the cluster. This can happen because of maintenance activity on a specific Datanode, power failures, hardware failures, kernel panics, unexpected reboots, etc. Because of data locality, the Datanodes holding more data get churned harder, and an unbalanced cluster can directly affect your MapReduce job performance. You can use the command below to run the HDFS balancer:

sudo -u hdfs hdfs balancer -threshold <threshold-value>

The default threshold value is 10; we can reduce it down to 1 (it's better to run the balancer with the lowest threshold).

Sample output:

[root@sandbox ~]# sudo -u hdfs hdfs balancer -threshold 1
16/04/06 18:57:16 INFO balancer.Balancer: Using a threshold of 1.0
16/04/06 18:57:16 INFO balancer.Balancer: namenodes = [hdfs://sandbox.hortonworks.com:8020]
16/04/06 18:57:16 INFO balancer.Balancer: parameters = Balancer.Parameters [BalancingPolicy.Node, threshold = 1.0, max idle iteration = 5, #excluded nodes = 0, #included nodes = 0, #source nodes = 0, run during upgrade = false]
16/04/06 18:57:16 INFO balancer.Balancer: included nodes = []
16/04/06 18:57:16 INFO balancer.Balancer: excluded nodes = []
16/04/06 18:57:16 INFO balancer.Balancer: source nodes = []
Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
16/04/06 18:57:17 INFO balancer.KeyManager: Block token params received from NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
16/04/06 18:57:17 INFO block.BlockTokenSecretManager: Setting block keys
16/04/06 18:57:17 INFO balancer.KeyManager: Update block keys every 2hrs, 30mins, 0sec
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.movedWinWidth = 5400000 (default=5400000)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.moverThreads = 1000 (default=1000)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.dispatcherThreads = 200 (default=200)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.datanode.balance.max.concurrent.moves = 5 (default=5)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.getBlocks.size = 2147483648 (default=2147483648)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.getBlocks.min-block-size = 10485760 (default=10485760)
16/04/06 18:57:17 INFO block.BlockTokenSecretManager: Setting block keys
16/04/06 18:57:17 INFO balancer.Balancer: dfs.balancer.max-size-to-move = 10737418240 (default=10737418240)
16/04/06 18:57:17 INFO balancer.Balancer: dfs.blocksize = 134217728 (default=134217728)
16/04/06 18:57:17 INFO net.NetworkTopology: Adding a new node: /default-rack/10.0.2.15:50010
16/04/06 18:57:17 INFO balancer.Balancer: 0 over-utilized: []
16/04/06 18:57:17 INFO balancer.Balancer: 0 underutilized: []
The cluster is balanced. Exiting...
Apr 6, 2016 6:57:17 PM 0 0 B 0 B -1 B
Apr 6, 2016 6:57:17 PM Balancing took 1.383 seconds

We can schedule a weekly cron job on an edge node which runs the balancer and emails the results to the Hadoop Admin.

3. Adding new nodes to the cluster

We should always maintain the list of Datanodes which are authorized to communicate with the Namenode. This can be achieved by setting the dfs.hosts property in hdfs-site.xml:

<property>
<name>dfs.hosts</name>
<value>/etc/hadoop/conf/allowed-datanodes.txt</value>
</property>

If we don't set this property, then any machine which has the Datanode software installed and a copy of hdfs-site.xml can contact the Namenode and become part of the Hadoop cluster.

3.1 For Nodemanagers

We can add the property below in yarn-site.xml:

<property>
<name>yarn.resourcemanager.nodes.include-path</name>
<value>/etc/hadoop/conf/allowed-nodemanagers.txt</value>
</property>

4. Decommissioning a node from the cluster

Even though HDFS is fault tolerant, it's a bad idea to simply stop one or more Datanode daemons or shut them down, even gracefully. The better solution is to add the IP address of the Datanode machine we need to remove from the cluster to the exclude file maintained by the dfs.hosts.exclude property and run the command below:

sudo -u hdfs hdfs dfsadmin -refreshNodes

After this, the Namenode will start replicating all the blocks to the other existing Datanodes in the cluster. Once the decommission process is complete, it's safe to shut down the Datanode daemon. You can track the progress of the decommission process on the NN Web UI.

4.1 For YARN:

Add the IP address of the NodeManager machine to the file maintained by the yarn.resourcemanager.nodes.exclude-path property and run the command below:

sudo -u yarn yarn rmadmin -refreshNodes

5. Datanode Volume Failures

The Namenode Web UI shows information about Datanode volume failures. We should check this information periodically, or set up automated monitoring using Nagios, Ambari Metrics (if you are using the Hortonworks Hadoop distribution), JMX monitoring (http://<namenode-host>:50070/jmx), etc. Multiple disk failures on a single Datanode can cause the Datanode daemon to shut down (please check the dfs.datanode.failed.volumes.tolerated property and set it accordingly in hdfs-site.xml).

6. Database Backups

If you have multiple Hadoop ecosystem components installed, then you should schedule a backup script to take database dumps, e.g. for:

1. Hive metastore database
2. Oozie DB
3. Ambari DB
4. Ranger DB

Create a simple shell script with the backup commands and schedule it for a weekend; add logic to send an email once the backups are done.

7. HDFS Metadata backup

The fsimage holds the metadata of your Hadoop filesystem, and if for some reason it gets corrupted, your cluster is unusable, so it's very important to keep periodic backups of the fsimage. You can schedule a shell script with the command below to back up the fsimage:

hdfs dfsadmin -fetchImage fsimage.backup.ddmmyyyy

8. Purging older log files

In production clusters, if we don't clean up older Hadoop log files, they can eat your entire disk and daemons can crash with a "no space left on device" error. Always get older log files cleaned up via a cleanup script!

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!! 🙂
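To tie sections 1, 2, 6, 7, and 8 together, here is a minimal sketch of the kind of weekly maintenance script mentioned above. It is only an illustration, not part of the original article: the script name, paths, mail recipient, database names, and 30-day retention are assumptions to adapt to your cluster, and the dump commands assume MySQL-backed Hive/Oozie databases with credentials handled separately (e.g. via ~/.my.cnf).

#!/bin/bash
# weekly_hadoop_maintenance.sh - hypothetical name; schedule from cron on an edge node
ADMIN_EMAIL="hadoop-admin@example.com"        # assumption: admin mailing list
BACKUP_DIR="/backup/hadoop/$(date +%d%m%Y)"   # assumption: local backup location
mkdir -p "$BACKUP_DIR"

# 1. HDFS health check, mailed to the admin
sudo -u hdfs hadoop fsck / > "$BACKUP_DIR/fsck.out" 2>&1
mail -s "Weekly HDFS fsck report" "$ADMIN_EMAIL" < "$BACKUP_DIR/fsck.out"

# 2. HDFS balancer with a low threshold
sudo -u hdfs hdfs balancer -threshold 1 > "$BACKUP_DIR/balancer.out" 2>&1

# 6. Database dumps (assumes MySQL; credentials come from ~/.my.cnf or similar)
mysqldump hive  > "$BACKUP_DIR/hive_metastore.sql"
mysqldump oozie > "$BACKUP_DIR/oozie.sql"

# 7. fsimage backup
sudo -u hdfs hdfs dfsadmin -fetchImage "$BACKUP_DIR/fsimage.backup.$(date +%d%m%Y)"

# 8. Purge Hadoop log files older than 30 days (assumed retention)
find /var/log/hadoop* -type f -name "*.log.*" -mtime +30 -delete

mail -s "Weekly Hadoop maintenance completed" "$ADMIN_EMAIL" < /dev/null

A cron entry such as 0 2 * * 6 /opt/scripts/weekly_hadoop_maintenance.sh (path assumed) would run it early every Saturday.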
04-07-2016
08:26 PM
5 Kudos
@grajagopal I think it does. I'm not a security expert, but when I configured SSL for HiveServer2, I could see the line below in the hiveserver2 logs:

2016-04-07 13:24:45,538 INFO [Thread-10]: auth.HiveAuthFactory (HiveAuthFactory.java:getServerSSLSocket(274)) - SSL Server Socket Enabled Protocols: [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
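If you want to double-check from the client side, one common approach (not from the original reply) is to probe the HiveServer2 port with openssl and force a specific protocol; the host placeholder and port 10000 below are assumptions based on the usual HiveServer2 defaults:

# Handshake succeeds only if the server accepts TLSv1.2 on this port
openssl s_client -connect <hiveserver2-host>:10000 -tls1_2 < /dev/null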
04-07-2016
11:38 AM
2 Kudos
@Michael Aube Please try the command below on your sandbox:

scp <username>@<ip-of-mac>:$path/mydatafile.csv $local_path_on_sandbox
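For example, with placeholder values filled in (the username, IP, and paths below are purely hypothetical and should be replaced with your own):

scp maube@192.168.1.20:/Users/maube/Documents/mydatafile.csv /tmp/mydatafile.csv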
04-06-2016
02:40 PM
@Ancil McBarnett - Perfecto!! This works. Thanks a ton 🙂
04-06-2016
02:21 PM
4 Kudos
I have an HDP-2.3.4.0 Kerberized cluster and I have enabled SSL for HiveServer2 using this documentation link. The hiveserver2 daemon is running fine; however, I'm unable to connect to it using beeline. I have a valid Kerberos ticket:

[vagrant@ambari-slave1 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: vagrant@SUPPORT.COM
Valid starting Expires Service principal
04/06/16 11:08:20 04/07/16 11:08:19 krbtgt/SUPPORT.COM@SUPPORT.COM
renew until 04/06/16 11:08:20
[vagrant@ambari-slave1 ~]$ date
Wed Apr 6 13:53:26 UTC 2016
[vagrant@ambari-slave1 ~]$

I tried the commands below; however, none of them is working.

Command 1:

!connect jdbc:hive2://ambari-slave1.support.com:10000/;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;

Error:

Error: Could not open client transport with JDBC Uri: jdbc:hive2://ambari-slave1.support.com:10000/;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;: Peer indicated failure: Unsupported mechanism type PLAIN (state=08S01,code=0)

Error in hiveserver2 logs:

ERROR [HiveServer2-Handler-Pool: Thread-44]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
Caused by: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at sun.security.ssl.InputRecord.handleUnknownRecord(InputRecord.java:710)
at sun.security.ssl.InputRecord.read(InputRecord.java:527)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:928)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
... 16 more

Command 2 with error:

beeline> !connect jdbc:hive2://ambari-slave1.support.com:10000/default;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;principal=hive/ambari-slave1.support.com@SUPPORT.COM;
Connecting to jdbc:hive2://ambari-slave1.support.com:10000/default;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;principal=hive/ambari-slave1.support.com@SUPPORT.COM;
Enter username for jdbc:hive2://ambari-slave1.support.com:10000/default;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;principal=hive/ambari-slave1.support.com@SUPPORT.COM;:
Enter password for jdbc:hive2://ambari-slave1.support.com:10000/default;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;principal=hive/ambari-slave1.support.com@SUPPORT.COM;:
16/04/06 13:57:05 [main]: WARN transport.TSaslTransport: Could not send failure response
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:161)
at org.apache.thrift.transport.TSaslTransport.sendSaslMessage(TSaslTransport.java:166)
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:227)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:277)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:185)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:156)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:142)
at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:207)
at org.apache.hive.beeline.Commands.connect(Commands.java:1149)
at org.apache.hive.beeline.Commands.connect(Commands.java:1070)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:980)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:823)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:781)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:485)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:468)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:159)
... 36 more
Error: Could not open client transport with JDBC Uri: jdbc:hive2://ambari-slave1.support.com:10000/default;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;principal=hive/ambari-slave1.support.com@SUPPORT.COM;: Invalid status 21
Also, could not send response: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe (state=08S01,code=0)

Note - Same error in hiveserver2 logs as given above.

Command 3 (with auth=noSasl):
!connect jdbc:hive2://ambari-slave1.support.com:10000/default;auth=noSasl;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;principal=hive/ambari-slave1.support.com@SUPPORT.COM;
Connecting to jdbc:hive2://ambari-slave1.support.com:10000/default;auth=noSasl;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;principal=hive/ambari-slave1.support.com@SUPPORT.COM;
Enter username for jdbc:hive2://ambari-slave1.support.com:10000/default;auth=noSasl;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;principal=hive/ambari-slave1.support.com@SUPPORT.COM;:
Enter password for jdbc:hive2://ambari-slave1.support.com:10000/default;auth=noSasl;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;principal=hive/ambari-slave1.support.com@SUPPORT.COM;:
16/04/06 13:59:54 [main]: ERROR jdbc.HiveConnection: Error opening session
org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:380)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:230)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:156)
at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:143)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:562)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:171)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:142)
at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:207)
at org.apache.hive.beeline.Commands.connect(Commands.java:1149)
at org.apache.hive.beeline.Commands.connect(Commands.java:1070)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:980)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:823)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:781)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:485)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:468)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Error: Could not establish connection to jdbc:hive2://ambari-slave1.support.com:10000/default;auth=noSasl;ssl=true;sslTrustStore=/etc/hive/conf/hive.jks;trustStorePassword=password;principal=hive/ambari-slave1.support.com@SUPPORT.COM;: null (state=08S01,code=0)

Note - Same error in hiveserver2 logs as given above.

When I tried SSL on HiveServer2 without Kerberos on the same setup, it works fine without any issue. Hive/Security experts - please help! 🙂
Labels: Apache Hive
04-05-2016
05:44 PM
1 Kudo
@Sushil Saxena - Do you have duplicate jars (different versions) in the Oozie sharelib for Hive? Can you please check that?

hadoop fs -ls /user/oozie/share/lib/lib_<date>/hive/
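If you are not sure which lib_<date> directory Oozie is actually using, one way to check (an addition to the original reply, assuming the default Oozie port 11000) is to ask the Oozie server directly:

oozie admin -oozie http://<oozie-host>:11000/oozie -shareliblist hive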
04-05-2016
05:39 AM
1 Kudo
@Aaron Dossett - Okay, if that is the case, can you please try removing the conflicting jar file(s) from the Oozie sharelib and see if that helps? If you want to revert the changes, you can just re-create the Oozie sharelib using the commands below.

Note - Please run the commands below on the Oozie host as the oozie user.

/usr/hdp/2.3.2.0-2950/oozie/bin/oozie-setup.sh sharelib create -locallib /usr/hdp/<version>/oozie/oozie-sharelib.tar.gz -fs hdfs://<namenode-host>:8020
oozie admin -oozie http://<oozie-host>:11000/oozie -sharelibupdate
04-04-2016
06:43 PM
6 Kudos
@rbalam - It's possible that both ResourceManagers are in standby (or both in active) state. Can you please run the commands below and check?

sudo -u yarn yarn rmadmin -getServiceState rm1
sudo -u yarn yarn rmadmin -getServiceState rm2

If you find that both RMs are in standby state, then you can initiate a manual failover using the command below:

sudo -u yarn yarn rmadmin -transitionToActive --forcemanual rm1
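As an additional check (my addition, not part of the original reply), the ResourceManager REST API also reports the HA state; the host placeholder and default port 8088 below are assumptions:

# Look for the haState field in the JSON response
curl http://<rm-host>:8088/ws/v1/cluster/info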