Member since: 06-13-2016
Posts: 4
Kudos Received: 1
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 30311 | 07-20-2016 10:32 AM
 | 2313 | 06-14-2016 10:17 AM
02-23-2017
12:23 PM
You don't have to stop any instance or service on the cluster. The error messages stop after 30 to 60 seconds.
07-20-2016
10:32 AM
[SOLVED] I removed entries from /etc/hosts that were pointing only to short host names rather than FQDNs; the roles were trying to invoke kinit as user/hostname@realm instead of user/fqdn@realm.
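For reference, a minimal sketch of that fix; the FQDN beth-1.example.com and the realm EXAMPLE.COM are hypothetical placeholders (only the short name beth-1 and the IP appear in the original posts):

# /etc/hosts -- each node should map to its FQDN first, with the short name only as an alias:
10.13.9.13   beth-1.example.com   beth-1

# Verify the host resolves to the FQDN and that the ticket uses the matching principal:
hostname -f                                          # should print beth-1.example.com
kinit -kt hdfs.keytab hdfs/beth-1.example.com@EXAMPLE.COM
klist                                                # default principal should show hdfs/beth-1.example.com@EXAMPLE.COM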
07-14-2016
10:42 AM
Hi folks, I configured my cluster to use my KDC to authenticate the services. Everything works fine, but I'm not able to access HDFS files from the command line. I've already tried the instructions at http://www.cloudera.com/documentation/archive/cdh/4-x/4-4-0/CDH4-Security-Guide/cdh4sg_topic_22_1.html, but without success. Error message:
[hdfs@beth-1 tmp]$ export HADOOP_SECURE_DN_USER=hdfs/beth-1@beth-1
[hdfs@beth-1 tmp]$ export HADOOP_SECURE_DN_PID_DIR=/var/lib/hadoop-hdfs
[hdfs@beth-1 tmp]$ export HADOOP_SECURE_DN_LOG_DIR=/var/log/hadoop-hdfs
[hdfs@beth-1 tmp]$ export JSVC_HOME=/opt/cloudera/parcels/CDH/lib/bigtop-utils/
[hdfs@beth-1 tmp]$ kinit -k -t hdfs.keytab hdfs/beth-1
[hdfs@beth-1 tmp]$ echo $?
0
[hdfs@beth-1 tmp]$ klist
Ticket cache: FILE:/tmp/krb5cc_495
Default principal: hdfs/beth-1@beth-1
Valid starting Expires Service principal
07/14/16 14:34:41 07/15/16 14:34:41 krbtgt/beth-1@beth-1
renew until 07/21/16 14:34:41
[hdfs@beth-1 tmp]$ hdfs dfs -ls /
16/07/14 14:35:24 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs/beth-1@beth-1 (auth:KERBEROS) cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
16/07/14 14:35:27 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs/beth-1@beth-1 (auth:KERBEROS) cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
16/07/14 14:35:27 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
16/07/14 14:35:29 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs/beth-1@beth-1 (auth:KERBEROS) cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
16/07/14 14:35:29 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
16/07/14 14:35:34 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs/beth-1@beth-1 (auth:KERBEROS) cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
16/07/14 14:35:34 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
16/07/14 14:35:34 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs/beth-1@beth-1 (auth:KERBEROS) cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
16/07/14 14:35:34 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
16/07/14 14:35:37 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs/beth-1@beth-1 (auth:KERBEROS) cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
16/07/14 14:35:37 WARN ipc.Client: Couldn't setup connection for hdfs/beth-1@beth-1 to beth-1/10.13.9.13:8020
org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:555)
at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:370)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:721)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:720)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1525)
at org.apache.hadoop.ipc.Client.call(Client.java:1442)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2095)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1214)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1210)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1210)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:64)
at org.apache.hadoop.fs.Globber.doGlob(Globber.java:285)
at org.apache.hadoop.fs.Globber.glob(Globber.java:151)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1634)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:102)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:305)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:362)
16/07/14 14:35:37 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs/beth-1@beth-1 (auth:KERBEROS) cause:java.io.IOException: Couldn't setup connection for hdfs/beth-1@beth-1 to beth-1/10.13.9.13:8020
ls: Failed on local exception: java.io.IOException: Couldn't setup connection for hdfs/beth-1@beth-1 to beth-1/10.13.9.13:8020; Host Details : local host is: "beth-1/10.13.9.13"; destination host is: "beth-1":8020;
[hdfs@beth-1 tmp]$
Can anyone please help me?
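A small check that can expose this kind of mismatch, assuming the keytab sits in the current directory as in the transcript above (the command is a sketch, not from the original post):

# List the principals actually stored in the keytab and compare them with what klist reports
# for the current ticket:
klist -kt hdfs.keytab

# If the ticket was obtained for a short-host-name principal while the services expect an
# FQDN-based one (or vice versa), the SASL/GSSAPI handshake against the NameNode fails with
# "GSS initiate failed". In this thread the root cause turned out to be /etc/hosts entries
# that resolved only to short names -- see the accepted solution above.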
06-14-2016
10:17 AM
1 Kudo
[Solved] It was necessary to:
- Stop all roles running on the server, and kill -9 the remaining processes (sudo lsof +d /var/run/cloudera-scm-agent/process showed them).
- Download the cloudera-scm-agent package from its mirror (matching my scenario's version).
- Execute: sudo service cloudera-scm-agent stop, then sudo umount /var/run/cloudera-scm-agent/process
- Reinstall the cloudera-scm-agent package and execute: sudo service cloudera-scm-agent clean_restart
This solved the issue. Note: some roles were unpacked under /var/run/cloudera-scm-agent/process with corrupted files (e.g. a hue.ini over 8 GB in size, with the directory path repeated many times in its content).
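For clarity, a consolidated sketch of that recovery sequence; the yum reinstall step is an assumption (a RHEL-style host), adjust it for your package manager:

# Find and kill any leftover role processes still holding the agent's process directory:
sudo lsof +d /var/run/cloudera-scm-agent/process
sudo kill -9 <leftover-pid>                 # placeholder PID taken from the lsof output above

# Stop the agent and release the mount over its process directory:
sudo service cloudera-scm-agent stop
sudo umount /var/run/cloudera-scm-agent/process

# Reinstall the agent package (yum assumed here) and do a clean restart:
sudo yum reinstall -y cloudera-scm-agent
sudo service cloudera-scm-agent clean_restart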
06-13-2016
06:27 PM
Hi, I'm facing problems when I change the configuration of any service via CM, because the deployed instance under /var/run/cloudera-scm-agent/process is always created with the old configuration. I have already tried `service cloudera-scm-agent clean_restart_confirmed` before deploying the client configuration, but with no effect. The newly created path still contains the outdated configuration files. I rebooted the machine, but the problem persists. What else should I do?
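A small diagnostic sketch for this situation, assuming shell access on the affected host; <changed_setting> and <newest_dir> are placeholders, not values from the original post:

# List the role config directories, newest first; each (re)deploy should create a new one:
sudo ls -t /var/run/cloudera-scm-agent/process | head

# Check whether the newest directory actually contains the setting changed in Cloudera Manager:
sudo grep -r "<changed_setting>" /var/run/cloudera-scm-agent/process/<newest_dir>/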
Labels: Cloudera Manager