Member since: 07-27-2015
Posts: 92
Kudos Received: 4
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3010 | 12-15-2019 07:05 PM |
08-08-2016 10:38 PM
Hi, could you help me look into this issue? It causes our jobs to fail sometimes. Why is the connection closed intermittently? BR, Paul
08-07-2016 07:43 PM
Hi,
We run a secure CDH 5.7 cluster and launch Hive2 actions from Oozie.
1. We sometimes see the following errors in the HiveServer2 log:
______________________
2016-08-06 00:09:06,778 ERROR org.apache.thrift.transport.TSaslTransport: [HiveServer2-Handler-Pool: Thread-52]: SASL negotiation failure
javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or does not exist: owner=xxx, renewer=hive, realUser=hive/xxx.idc1.xx@XXX, issueDate=1470413336969, maxDate=1471018136969, sequenceNumber=9, masterKeyId=2]
    at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:594)
    at com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java:244)
    at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:539)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:283)
    at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:765)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:762)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:356)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:762)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or does not exist: owner=xxx, renewer=hive, realUser=hive/xxx.idc1.xx@XXX, issueDate=1470413336969, maxDate=1471018136969, sequenceNumber=9, masterKeyId=2
    at org.apache.hadoop.hive.thrift.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java:114)
    at org.apache.hadoop.hive.thrift.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java:56)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.getPassword(HadoopThriftAuthBridge.java:588)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.handle(HadoopThriftAuthBridge.java:619)
    at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:585)
    ... 15 more
2016-08-06 00:09:06,779 ERROR org.apache.thrift.server.TThreadPoolServer: [HiveServer2-Handler-Pool: Thread-52]: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: DIGEST-MD5: IO error acquiring password
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:765)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:762)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:356)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:762)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: DIGEST-MD5: IO error acquiring password
    at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
    at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
    ... 10 more
______________________________________
2. The Hive2 action sometimes fails, though it succeeds most of the time.
2.1 The log of a successful run is below:
________________________________________
Connecting to jdbc:hive2://xxxx.idc1.xxx:10000/
Error: Could not open client transport with JDBC Uri: jdbc:hive2://xxxx.idc1.xxx:10000/: Peer indicated failure: DIGEST-MD5: IO error acquiring password (state=08S01,code=0)
Connected to: Apache Hive (version 1.1.0-cdh5.7.1)
Driver: Hive JDBC (version 1.1.0-cdh5.7.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
No rows affected (0.078 seconds)
INFO : Compiling command(queryId=hive_20160808000909_06e0d60a-7dcd-485f-b177-f83aced6ee9b): use xxx
_____________________________________
2.2 The log of a failed run is below; this happens intermittently:
Connecting to jdbc:hive2://xxxx.idc1.xxx:10000/
Error: Could not open client transport with JDBC Uri: jdbc:hive2://xxxx.idc1.xxx:10000/: Peer indicated failure: DIGEST-MD5: IO error acquiring password (state=08S01,code=0)
No current connection
Connected to: Apache Hive (version 1.1.0-cdh5.7.1)
Driver: Hive JDBC (version 1.1.0-cdh5.7.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Closing: 0: jdbc:hive2://xxxx.idc1.xxx:10000/
Intercepting System.exit(2)
__________________________________________________
Could you help me resolve these intermittent Hive2 action failures?
Thanks in advance.
BR, Paul
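As a side note on reading the error above: the issueDate and maxDate fields in the InvalidToken message are epoch milliseconds, so their difference gives the token's maximum lifetime. A minimal sketch, using the values copied from the log:

```shell
# issueDate/maxDate from the InvalidToken message are epoch milliseconds.
issue_ms=1470413336969
max_ms=1471018136969
# Maximum token lifetime in days: (maxDate - issueDate) / ms-per-day
echo $(( (max_ms - issue_ms) / 86400000 ))   # 7: the default 7-day max lifetime of Hadoop delegation tokens
# Human-readable issue time (GNU date):
date -u -d "@$(( issue_ms / 1000 ))"
```

This only confirms the token in the log had the stock 7-day maximum; it does not by itself explain why a fresh action would present an expired token.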
Labels:
- Apache Hive
- Apache Oozie
07-28-2016 12:15 AM
Hi Harsh, I changed HDFS -> Configuration -> Trusted Realms, and the issue is gone. Great! Thanks for your excellent work. BR, Paul
07-28-2016 12:02 AM
Hi Harsh, thank you very much. Is there any other solution? The insecure cluster is our production environment, so we cannot reboot it. Can Flume support the hftp or webhdfs protocol? Here is our configuration:

agent.sources = source1
agent.channels = channel1 channel2
agent.sinks = hdfs_sink1 hdfs_sink2
agent.sources.source1.selector.type = replicating
agent.sources.source1.channels = channel1 channel2
agent.sources.source1.type = spooldir
agent.sources.source1.spoolDir = /flumeDataTest
agent.sources.source1.interceptors = i1
agent.sources.source1.interceptors.i1.type = timestamp
agent.sources.source1.deserializer = LINE
agent.sources.source1.deserializer.maxLineLength = 65535
agent.sources.source1.decodeErrorPolicy = IGNORE
agent.channels.channel1.type = memory
agent.channels.channel1.capacity = 10000
agent.channels.channel1.transactionCapacity = 10000
agent.channels.channel2.type = memory
agent.channels.channel2.capacity = 10000
agent.channels.channel2.transactionCapacity = 10000
agent.sinks.hdfs_sink1.channel = channel1
agent.sinks.hdfs_sink1.type = hdfs
agent.sinks.hdfs_sink1.hdfs.path = hdfs://arch-od-data01.beta1.fn:8020/user/kai.he/app_logs/%Y-%m-%d
agent.sinks.hdfs_sink1.hdfs.fileType = DataStream
agent.sinks.hdfs_sink1.hdfs.writeFormat = TEXT
agent.sinks.hdfs_sink1.hdfs.useLocalTimeStamp = true
agent.sinks.hdfs_sink1.hdfs.filePrefix = ev
agent.sinks.hdfs_sink1.hdfs.inUsePrefix = .
agent.sinks.hdfs_sink1.hdfs.request-timeout = 30000
agent.sinks.hdfs_sink1.hdfs.rollCount = 6000
agent.sinks.hdfs_sink1.hdfs.rollInterval = 60
agent.sinks.hdfs_sink1.hdfs.rollSize = 0
agent.sinks.hdfs_sink1.hdfs.kerberosKeytab = /tmp/kai.keytab
agent.sinks.hdfs_sink1.hdfs.kerberosPrincipal = kai.he@OD.BETA
agent.sinks.hdfs_sink2.channel = channel2
agent.sinks.hdfs_sink2.type = hdfs
agent.sinks.hdfs_sink2.hdfs.path = hdfs://cache01.dev1.fn:8020/flume/app_logs/%Y-%m-%d
agent.sinks.hdfs_sink2.hdfs.fileType = DataStream
agent.sinks.hdfs_sink2.hdfs.writeFormat = TEXT
agent.sinks.hdfs_sink2.hdfs.useLocalTimeStamp = true
agent.sinks.hdfs_sink2.hdfs.filePrefix = f3
agent.sinks.hdfs_sink2.hdfs.inUsePrefix = .
agent.sinks.hdfs_sink2.hdfs.request-timeout = 30000
agent.sinks.hdfs_sink2.hdfs.rollCount = 6000
agent.sinks.hdfs_sink2.hdfs.rollInterval = 60
agent.sinks.hdfs_sink2.hdfs.rollSize = 0

Thank you again. BR, Paul
07-27-2016 11:38 PM
Hi, this is very urgent! We are working with two CDH 5.7.1 clusters: one secure and one insecure. The Flume agent service is installed on the secure cluster.

1. On the secure cluster we run commands against the insecure cluster:

a. hdfs dfs -ls hdfs://cache01.dev1.fn:8020/flume/app_logs/
ls: End of File Exception between local host is: "arch-od-tracker04.beta1.fn/10.202.251.14"; destination host is: "cache01.dev1.fn":8020; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException

b. hdfs dfs -ls webhdfs://cache01.dev1.fn:50070/flume/app_logs/
Found 1 items
drwxrwxrwx - flume supergroup 0 2016-07-28 13:15 webhdfs://cache01.dev1.fn:50070/flume/app_logs/2016-07-28

So when we run Flume:
1. secure cluster -----> insecure cluster (fail, with the java.io.EOFException from a. above); secure cluster -----> secure cluster itself (success)
2. -----> insecure cluster (success); secure cluster -----> insecure cluster (success)
3. -----> secure cluster (success); secure cluster -----> secure cluster (success)

Now, my question is: how do we configure the Flume agent so that case 1 works? Thanks in advance. BR, Paul
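For reference, a commonly cited client-side setting for secure-to-insecure HDFS access is the fallback property in the secure cluster's core-site.xml. This is offered as a sketch only, not a confirmed fix for this cluster (the Trusted Realms change mentioned elsewhere in this thread is what ultimately resolved it):

```xml
<!-- Allow a Kerberos-enabled client to fall back to simple auth
     when the remote cluster (here the insecure one) is not secured. -->
<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>
```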
Labels:
- HDFS
09-22-2015 12:44 AM
Hi Wilfred, I have installed the Spark Gateway, and YARN was already installed along with Oozie. Unfortunately, when I run:

spark-submit --class com.cloudera.sparkwordcount.SparkWordCount --master yarn target/sparkwordcount-0.0.1-SNAPSHOT.jar /user/paul 2

I get this error:

Exception in thread "Driver" scala.MatchError: java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobConf (of class java.lang.NoClassDefFoundError)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:432)

How can I resolve this issue? Thanks in advance. Paul
09-21-2015 02:49 AM
Hi Wilfred, could you give me an example? Thanks in advance. Paul
08-25-2015 08:16 PM
Hi Wilfred, I will try to follow your suggestion. Thanks, Paul
08-24-2015 08:00 PM
Hi, as you know, Oozie 4.0.0-cdh5.3.2 in CDH 5.3.2 does not support a Spark action. However, we would like workflow support for our Spark jobs. How can we work around this in our CDH 5.3.2 environment? Thanks, Paul
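Since the missing piece is only the workflow integration, a common workaround until a native Spark action is available is an Oozie shell action that invokes spark-submit. This is a sketch under assumptions: the action name, script name, and the wrapped spark-submit command are illustrative, not taken from this thread.

```xml
<!-- Hypothetical workflow fragment: a shell action wrapping spark-submit. -->
<action name="spark-via-shell">
    <shell xmlns="uri:oozie:shell-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <exec>run-spark.sh</exec>
        <!-- run-spark.sh would call, e.g.:
             spark-submit --class your.MainClass --master yarn your-app.jar -->
        <file>run-spark.sh#run-spark.sh</file>
    </shell>
    <ok to="end"/>
    <error to="fail"/>
</action>
```

The script must be shipped with the workflow via the file element so it is localized on whichever NodeManager runs the action.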
Labels:
- Apache Oozie
07-28-2015 01:32 AM
And I would like to confirm that namespaces are supported when renaming a table via a snapshot. Thanks.