Member since: 06-13-2016
Posts: 17
Kudos Received: 2
Solutions: 0
09-17-2018
01:20 PM
Hi, HDP-2.6.1.0 cluster running on CentOS 7.5.1804. I have the following log file growing under /tmp on the Knox server. The full path is /tmp/username/hive.log (where username is the user accessing Hive); sample content of hive.log is provided below. Although the log file is rotated, I'd like to move it to /var/log, so I was wondering which section of the Hive(?) configuration is responsible for that log file. I see multiple traces of hive.log.dir and hive.log.file but I'm not sure which one is relevant (a config sketch follows the log sample below). Many thanks.

2018-09-12 13:07:08,943 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=TezRunVertex.Reducer 2 start=1536750381000 end=1536750428943 duration=47943 from=org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor>
2018-09-12 13:07:08,944 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=TezRunVertex.Reducer 5 start=1536750368334 end=1536750428944 duration=60610 from=org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor>
2018-09-12 13:07:08,944 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=TezRunVertex.Reducer 6 start=1536750419768 end=1536750428944 duration=9176 from=org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor>
2018-09-12 13:07:08,944 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(149)) - <PERFLOG method=TezRunVertex.Reducer 7 from=org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor>
2018-09-12 13:07:08,944 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=TezRunVertex.Reducer 7 start=1536750428944 end=1536750428944 duration=0 from=org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor>
2018-09-12 13:07:09,008 INFO [main]: monitoring.TezJobMonitor$UpdateFunction (TezJobMonitor.java:update(137)) - Map 1: 10/10 Map 3: 29/29 Map 4: 10/10 Map 8: 29/29 Reducer 2: 1/1 Reducer 5: 1/1 Reducer 6: 1/1 Reducer 7: 1/1
2018-09-12 13:07:09,009 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=TezRunDag start=1536750115791 end=1536750429009 duration=313218 from=org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor>
2018-09-12 13:07:09,106 INFO [main]: counters.Limits (Limits.java:ensureInitialized(60)) - Counter limits initialized with parameters: GROUP_NAME_MAX=256, MAX_GROUPS=3000, COUNTER_NAME_MAX=64, MAX_COUNTERS=10000
2018-09-12 13:07:09,186 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(149)) - <PERFLOG method=RemoveTempOrDuplicateFiles from=FileSinkOperator>
2018-09-12 13:07:09,193 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=RemoveTempOrDuplicateFiles start=1536750429186 end=1536750429193 duration=7 from=FileSinkOperator>
2018-09-12 13:07:09,194 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(149)) - <PERFLOG method=RenameOrMoveFiles from=FileSinkOperator>
2018-09-12 13:07:09,194 INFO [main]: exec.FileSinkOperator (Utilities.java:mvFileToFinalPath(2026)) - Moving tmp dir: hdfs://HADOOP/tmp/hive/svc-feed/ac4bf6dd-f7be-4ded-b972-01ea88f8fe6b/hive_2018-09-12_13-01-35_168_1425040625106206929-1/-mr-10001/.hive-staging_hive_2018-09-12_13-01-35_168_1425040625106206929-1/_tmp.-ext-10002 to: hdfs://HADOOP/tmp/hive/svc-feed/ac4bf6dd-f7be-4ded-b972-01ea88f8fe6b/hive_2018-09-12_13-01-35_168_1425040625106206929-1/-mr-10001/.hive-staging_hive_2018-09-12_13-01-35_168_1425040625106206929-1/-ext-10002
2018-09-12 13:07:09,236 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=RenameOrMoveFiles start=1536750429194 end=1536750429236 duration=42 from=FileSinkOperator>
2018-09-12 13:07:09,265 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=runTasks start=1536750107335 end=1536750429265 duration=321930 from=org.apache.hadoop.hive.ql.Driver>
2018-09-12 13:07:09,266 INFO [main]: hooks.ATSHook (ATSHook.java:<init>(114)) - Created ATS Hook
2018-09-12 13:07:09,266 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(149)) - <PERFLOG method=PostHook.org.apache.hadoop.hive.ql.hooks.ATSHook from=org.apache.hadoop.hive.ql.Driver>
2018-09-12 13:07:09,268 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=PostHook.org.apache.hadoop.hive.ql.hooks.ATSHook start=1536750429266 end=1536750429268 duration=2 from=org.apache.hadoop.hive.ql.Driver>
2018-09-12 13:07:09,268 INFO [main]: ql.Driver (Driver.java:execute(1638)) - Resetting the caller context to
2018-09-12 13:07:09,269 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=Driver.execute start=1536750106856 end=1536750429269 duration=322413 from=org.apache.hadoop.hive.ql.Driver>
2018-09-12 13:07:09,270 INFO [main]: ql.Driver (SessionState.java:printInfo(984)) - OK
2018-09-12 13:07:09,270 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(149)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2018-09-12 13:07:09,270 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=releaseLocks start=1536750429270 end=1536750429270 duration=0 from=org.apache.hadoop.hive.ql.Driver>
2018-09-12 13:07:09,270 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=Driver.run start=1536750095097 end=1536750429270 duration=334173 from=org.apache.hadoop.hive.ql.Driver>
2018-09-12 13:07:09,272 INFO [ATS Logger 0]: hooks.ATSHook (ATSHook.java:createPostHookEvent(362)) - Received post-hook notification for :svc-feed_20180912130135_9b82b03b-0581-4976-88a6-b2a58ff9251a
2018-09-12 13:07:09,302 INFO [main]: exec.ListSinkOperator (Operator.java:close(616)) - Closing operator OP[60]
2018-09-12 13:07:09,368 INFO [main]: CliDriver (SessionState.java:printInfo(984)) - Time taken: 334.184 seconds
2018-09-12 13:07:09,369 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(149)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2018-09-12 13:07:09,369 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=releaseLocks start=1536750429369 end=1536750429369 duration=0 from=org.apache.hadoop.hive.ql.Driver>
2018-09-12 13:07:09,382 INFO [main]: tez.TezSessionPoolManager (TezSessionPoolManager.java:close(183)) - Closing tez session default? false
2018-09-12 13:07:09,382 INFO [main]: tez.TezSessionState (TezSessionState.java:close(293)) - Closing Tez Session
2018-09-12 13:07:09,383 INFO [main]: client.TezClient (TezClient.java:stop(518)) - Shutting down Tez Session, sessionName=HIVE-ac4bf6dd-f7be-4ded-b972-01ea88f8fe6b, applicationId=application_1503845958062_15700
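In case it helps, a minimal sketch of where this is usually set, assuming the stock HDP 2.6 Hive client log4j template (Ambari exposes it under Hive > Configs > Advanced hive-log4j) and that the client config on the Knox host lives under /etc/hive/conf; verify the property names against your own template:

# Inspect the client-side log4j template on the Knox host
grep -E 'hive\.log\.(dir|file)' /etc/hive/conf/hive-log4j.properties
# Typical defaults in the stock template:
#   hive.log.dir=${java.io.tmpdir}/${user.name}   -> /tmp/<username>/hive.log
#   hive.log.file=hive.log
# Depending on how the client is invoked, the values can also be overridden per run, e.g.:
hive --hiveconf hive.log.dir=/var/log/hive-client --hiveconf hive.log.file=hive.log

Changing hive.log.dir in the Advanced hive-log4j template (and restarting the affected clients) should move the file permanently; the per-invocation --hiveconf form is mainly useful for confirming that this template is the one actually in effect.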
Labels:
- Apache Hive
- Apache Knox
08-27-2017
10:29 AM
Thank you @Sandeep More, that seems to be the one! We're running the latest HDP (2.6.1), which comes with Knox 0.12.0. Going to try to convince support to release the 0.13.0 RPMs.
08-25-2017
09:54 AM
Hi, HDP-2.6.1.0-129. Are there any restrictions with regard to special characters in HBase row keys accessed through Knox via the HBase REST API? We're having issues with the # sign:

hbase(main):020:0> scan 'hbaseexample'
 k#0 column=columns:_ca1, timestamp=1503650932806, value=test
1 row(s) in 0.0200 seconds

When we try to access that row via Knox (with k#0 encoded as k%230) by calling https://srv-knx01:8443/gateway/default/hbase/hbaseexample/k%230 we get 404 Not Found. Adding two # signs throws an exception:

hbase(main):020:0> scan 'hbaseexample'
 k#0 column=columns:_ca1, timestamp=1503650932806, value=test
 k#0# column=columns:_ca1, timestamp=1503650932806, value=test
2 row(s) in 0.0200 seconds

Calling https://srv-knx01:8443/gateway/default/hbase/hbaseexample/k%230%23 results in:
Caused by: java.lang.IllegalArgumentException: Illegal character in fragment at index 57: http://srv-namenode01:60080/hbaseexample/k#0#?doAs=feeder

There were no issues with 2.4. Any hints would be greatly appreciated (a small diagnostic sketch follows below).
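As a way to narrow down where the # gets decoded, a rough diagnostic sketch; the hostnames, port, table and user are the ones from the example above, the direct REST call assumes the REST server accepts requests without Knox, and the double-encoded variant is only a test of the decode-and-rewrite theory, not a documented workaround:

# Direct HBase REST (bypassing Knox) - confirms the REST server itself handles %23
curl -ik 'http://srv-namenode01:60080/hbaseexample/k%230' -H 'Accept: application/json'

# Same row through Knox - the 404 and the 'Illegal character in fragment' error suggest
# Knox decodes %23 and rebuilds the dispatch URL with a literal '#', which the backend
# then treats as a URL fragment
curl -iku feeder 'https://srv-knx01:8443/gateway/default/hbase/hbaseexample/k%230' -H 'Accept: application/json'

# Double-encoding the percent sign (%25 -> %) can help confirm that theory
curl -iku feeder 'https://srv-knx01:8443/gateway/default/hbase/hbaseexample/k%25230' -H 'Accept: application/json'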
Labels:
- Apache HBase
- Apache Knox
- Apache Ranger
01-11-2017
03:24 PM
Hi, say I have HBase tables whose names start with poc (namespaces are not used). Is there any way to grant rights to all tables that start with poc using Ranger? I tried specifying poc* in the HBase Table section of the policy, but users get Insufficient Permissions errors when accessing the poc tables from the hbase shell. Explicitly specifying each poc table works fine. HDP-2.4.0.0-169 (HBase 1.1.2.2.4, Ranger 0.5.0.2.4). Thanks.
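For what it's worth, a rough sketch of what the same wildcard resource would look like if created through Ranger's public REST API instead of the UI; this assumes Ranger's public v2 API, the stock HBase service definition (resource names table / column-family / column), and placeholder values for the host, service name (hadoop_hbase), user and policy name, so treat the exact field names as illustrative for Ranger 0.5:

# Hypothetical wildcard policy payload - verify field names against your Ranger version
curl -u admin -X POST -H 'Content-Type: application/json' \
  http://ranger-host:6080/service/public/v2/api/policy \
  -d '{
    "service": "hadoop_hbase",
    "name": "poc_tables_read",
    "resources": {
      "table":         { "values": ["poc*"], "isExcludes": false },
      "column-family": { "values": ["*"] },
      "column":        { "values": ["*"] }
    },
    "policyItems": [
      { "users": ["poc_user"],
        "accesses": [ { "type": "read", "isAllowed": true } ] }
    ]
  }'

The UI value poc* should end up as the same wildcard value shown here; the Ranger audit log (Access tab) is usually the quickest way to see which resource string the denied request was actually evaluated against.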
Labels:
- Apache HBase
- Apache Ranger
08-29-2016
03:32 PM
1 Kudo
Thank you very much @Santhosh B Gowda -- that was it!
08-29-2016
10:35 AM
@Santhosh B Gowda It seems to be there (a kvno check is sketched after the output below):
Authenticating as principal root/admin@HADOOP.LOCAL with password.
Principal: yarn/hdp-nn01.local.net@HADOOP.LOCAL
Expiration date: [never]
Last password change: Fri Jul 08 14:12:54 CEST 2016
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 0 days 00:00:00
Last modified: Fri Jul 08 14:12:54 CEST 2016 (hdp-svc/admin@HADOOP.LOCAL)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 4
Key: vno 2, aes256-cts-hmac-sha1-96
Key: vno 2, aes128-cts-hmac-sha1-96
Key: vno 2, des3-cbc-sha1
Key: vno 2, arcfour-hmac
MKey: vno 1
Attributes:
Policy: [none]
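One thing that stands out when putting this next to the klist output in the post below: the KDC holds key vno 2 for this principal, while the keytab entries are at KVNO 1. A quick side-by-side check, using only the commands already shown in this thread:

# Key version the keytab was extracted with
klist -kt /etc/security/keytabs/yarn.service.keytab | grep yarn/hdp-nn01

# Key version the KDC currently holds for the principal
kadmin -p root/admin -q "getprinc yarn/hdp-nn01.local.net@HADOOP.LOCAL" | grep -i vno

# If the numbers differ, the keytab is stale and authentication attempts against it
# (kinit -kt, the ATS secure login) will typically fail until it is re-extracted.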
08-29-2016
09:42 AM
@Santhosh B Gowda Thank you Santhosh. It seems that it has expired?
$ klist -kt /etc/security/keytabs/yarn.service.keytab
Keytab name: FILE:/etc/security/keytabs/yarn.service.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
1 04/27/2016 15:56:20 yarn/hdp-nn01.local.net@HADOOP.LOCAL
1 04/27/2016 15:56:20 yarn/hdp-nn01.local.net@HADOOP.LOCAL
1 04/27/2016 15:56:20 yarn/hdp-nn01.local.net@HADOOP.LOCAL
1 04/27/2016 15:56:20 yarn/hdp-nn01.local.net@HADOOP.LOCAL
1 04/27/2016 15:56:20 yarn/hdp-nn01.local.net@HADOOP.LOCAL
Executing kinit -kt /etc/security/keytabs/yarn.service.keytab yarn/hdp-nn01.local.net@HADOOP.LOCAL gives me:
kinit: Password incorrect while getting initial credentials
but I can't recall ever setting a password for this principal. Thanks.
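If the keys really have changed on the KDC (which would match the "Password incorrect" and "Checksum failed" symptoms), one common way out is to re-extract the keytab. A hedged sketch only: ktadd generates new keys and bumps the kvno, so any other copies of this principal's keytab become invalid, and the ownership/permission values below are placeholders for whatever the original file had (typically yarn:hadoop, mode 400):

# On (or via) the KDC with an admin principal:
kadmin -p root/admin -q "ktadd -k /tmp/yarn.service.keytab yarn/hdp-nn01.local.net@HADOOP.LOCAL"

# Copy the fresh keytab into place on hdp-nn01 and restore ownership/permissions
install -o yarn -g hadoop -m 0400 /tmp/yarn.service.keytab /etc/security/keytabs/yarn.service.keytab

# Verify before restarting the App Timeline Server
kinit -kt /etc/security/keytabs/yarn.service.keytab yarn/hdp-nn01.local.net@HADOOP.LOCAL && klist

On an Ambari-managed cluster the more usual route is Ambari's "Regenerate Keytabs" action, which keeps all service keytabs in sync rather than fixing a single file by hand.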
08-29-2016
08:25 AM
Hi, I'm having problems starting the YARN App Timeline Server (HDP-2.4.0.0-169 kerberized cluster with Ambari 2.2.2.0). Everything was working fine for several months until we had to relocate the servers to a different data center, so the cluster had to be shut down.
I'm able to start the active and standby ResourceManagers (along with all NodeManagers), but the App Timeline Server fails with the following in the logs:
2016-08-28 18:21:51,903 FATAL applicationhistoryservice.ApplicationHistoryServer (ApplicationHistoryServer.java:launchAppHistoryServer(171)) - Error starting ApplicationHistoryServer
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to login
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceStart(ApplicationHistoryServer.java:112)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:169)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:178)
Caused by: java.io.IOException: Login failure for yarn/hdp-nn01.local.net@HADOOP.LOCAL from keytab /etc/security/keytabs/yarn.service.keytab: javax.security.auth.login.LoginException: Checksum failed
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:962)
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:275)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.doSecureLogin(ApplicationHistoryServer.java:335)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceStart(ApplicationHistoryServer.java:110)
... 3 more
Caused by: javax.security.auth.login.LoginException: Checksum failed
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:953)
... 6 more
Caused by: KrbException: Checksum failed
at sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType.decrypt(Aes256CtsHmacSha1EType.java:102)
at sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType.decrypt(Aes256CtsHmacSha1EType.java:94)
at sun.security.krb5.EncryptedData.decrypt(EncryptedData.java:175)
at sun.security.krb5.KrbAsRep.decrypt(KrbAsRep.java:149)
at sun.security.krb5.KrbAsRep.decryptUsingKeyTab(KrbAsRep.java:121)
at sun.security.krb5.KrbAsReqBuilder.resolve(KrbAsReqBuilder.java:285)
at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:361)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:776)
... 19 more
Caused by: java.security.GeneralSecurityException: Checksum failed
at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decryptCTS(AesDkCrypto.java:451)
at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decrypt(AesDkCrypto.java:272)
at sun.security.krb5.internal.crypto.Aes256.decrypt(Aes256.java:76)
at sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType.decrypt(Aes256CtsHmacSha1EType.java:100)
... 26 more
2016-08-28 18:21:51,904 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status -1
2016-08-28 18:21:51,906 INFO applicationhistoryservice.ApplicationHistoryServer (LogAdapter.java:info(45)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down ApplicationHistoryServer at hdp-nn01.local.net/192.168.12.73
************************************************************/
yarn.service.keytab is present on hdp-nn01.local.net, and krb5.conf seems to be intact. Any assistance would be greatly appreciated. Thanks.
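Since the failure is inside Aes256CtsHmacSha1EType, two quick checks independent of the keytab-freshness angle discussed in the replies above: confirm the keytab actually carries AES-256 entries, and confirm the JVM running the Timeline Server still has the unlimited-strength JCE policy installed (worth re-checking after a move if the Java install changed). A rough sketch, assuming a standard Oracle/OpenJDK 7 or 8 layout:

# Which key versions and encryption types are in the keytab?
klist -ket /etc/security/keytabs/yarn.service.keytab

# Can this JVM do 256-bit AES? 2147483647 means the unlimited-strength JCE policy
# is in place; 128 means the restricted default policy is still installed.
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'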
Labels:
- Apache YARN
06-14-2016
07:12 AM
Hello, due to system HDD space limitations I'd like to move /usr/hdp to a separate drive on all datanodes. Is there anything special to be worried about, or would a standard procedure (mount the new drive, rsync /usr/hdp to it, modify fstab, reboot) do the trick? Thanks.
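The outline above is essentially it; a hedged sketch of the per-datanode steps, with the device name, filesystem type and temporary mount point as placeholders, and assuming the node's HDP services are stopped (e.g. via Ambari maintenance mode) while the copy runs:

# Prepare and mount the new drive (placeholder device)
mkfs.xfs /dev/sdX1
mkdir -p /mnt/hdp-new
mount /dev/sdX1 /mnt/hdp-new

# Copy preserving owners, permissions, symlinks, hard links and xattrs
rsync -aHAX /usr/hdp/ /mnt/hdp-new/

# Swap the mount point: persist it in fstab, set the old tree aside, mount the new FS at /usr/hdp
umount /mnt/hdp-new
echo '/dev/sdX1  /usr/hdp  xfs  defaults,noatime  0 0' >> /etc/fstab
mv /usr/hdp /usr/hdp.old && mkdir /usr/hdp
mount /usr/hdp

# Verify the version symlinks survived before restarting services (or reboot to test fstab)
ls -l /usr/hdp/current | head

The main things to watch are that the rsync preserves the /usr/hdp/current and /usr/hdp/<version> symlink structure intact, and that /usr/hdp.old is only removed once services have come back cleanly on the new mount.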
Labels:
- Hortonworks Data Platform (HDP)