Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1953 | 07-09-2019 12:53 AM |
| | 11803 | 06-23-2019 08:37 PM |
| | 9093 | 06-18-2019 11:28 PM |
| | 10045 | 05-23-2019 08:46 PM |
| | 4454 | 05-20-2019 01:14 AM |
01-04-2016 03:21 AM
Hi Harsh,

@Harsh J wrote: Could you re-run the command also with the below env set?

$ export HADOOP_ROOT_LOGGER=TRACE,console
$ export HADOOP_OPTS="-Dsun.security.krb5.debug=true -Djavax.net.debug=ssl"
$ hadoop fs -ls /

Here is the result:

16/01/04 17:42:07 DEBUG util.Shell: setsid exited with exit code 0
16/01/04 17:42:07 DEBUG conf.Configuration: parsing URL jar:file:/app/hadoop-2.6.0-cdh5.4.5/share/hadoop/common/hadoop-common-2.6.0-cdh5.4.5.jar!/core-default.xml
16/01/04 17:42:07 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@4ae69619
16/01/04 17:42:07 DEBUG conf.Configuration: parsing URL file:/home/user01/yarn-conf/core-site.xml
16/01/04 17:42:07 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@30317bdd
16/01/04 17:42:08 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
16/01/04 17:42:08 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
16/01/04 17:42:08 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[GetGroups], about=, type=DEFAULT, always=false, sampleName=Ops)
16/01/04 17:42:08 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
16/01/04 17:42:08 DEBUG security.Groups: Creating new Groups object
16/01/04 17:42:08 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000; warningDeltaMs=5000
>>>KinitOptions cache name is /tmp/krb5cc_501
>>>DEBUG <CCacheInputStream> client principal is user01@DEVELOPMENT.COM
>>>DEBUG <CCacheInputStream> server principal is krbtgt/DEVELOPMENT.COM@DEVELOPMENT.COM
>>>DEBUG <CCacheInputStream> key type: 23
>>>DEBUG <CCacheInputStream> auth time: Mon Jan 04 17:41:23 WIB 2016
>>>DEBUG <CCacheInputStream> start time: Mon Jan 04 17:41:06 WIB 2016
>>>DEBUG <CCacheInputStream> end time: Tue Jan 05 03:41:23 WIB 2016
>>>DEBUG <CCacheInputStream> renew_till time: Mon Jan 11 17:41:06 WIB 2016
>>> CCacheInputStream: readFlags() FORWARDABLE; RENEWABLE; INITIAL; PRE_AUTH;
16/01/04 17:42:08 DEBUG security.UserGroupInformation: hadoop login
16/01/04 17:42:08 DEBUG security.UserGroupInformation: hadoop login commit
16/01/04 17:42:08 DEBUG security.UserGroupInformation: using kerberos user:user01@DEVELOPMENT.COM
16/01/04 17:42:08 DEBUG security.UserGroupInformation: Using user: "user01@DEVELOPMENT.COM" with name user01@DEVELOPMENT.COM
16/01/04 17:42:08 DEBUG security.UserGroupInformation: failure to login
javax.security.auth.login.LoginException: java.lang.IllegalArgumentException: Illegal principal name user01@DEVELOPMENT.COM: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to user01@DEVELOPMENT.COM
at org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:762)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:690)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:687)
at javax.security.auth.login.LoginContext.login(LoginContext.java:596)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:812)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:100)
at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Caused by: java.lang.IllegalArgumentException: Illegal principal name user01@DEVELOPMENT.COM: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to user01@DEVELOPMENT.COM
at org.apache.hadoop.security.User.<init>(User.java:50)
at org.apache.hadoop.security.User.<init>(User.java:43)
at org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:197)
... 30 more
Caused by: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to user01@DEVELOPMENT.COM
at org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:389)
at org.apache.hadoop.security.User.<init>(User.java:48)
... 32 more
ls: failure to login

The logs above show that the Kerberos client config still points to the default /etc/krb5.conf. I had been using a different path by exporting the KRB5_CONFIG environment variable. After editing /etc/krb5.conf to the proper values, it now works properly: I can browse HDFS and submit jobs to YARN.

@Harsh J wrote: Is this remote host also carrying the Unlimited JCE policy jars under its JDK, so it may use AES-256 if that is in use?

I use the JDK from Cloudera: jdk1.7.0_67-cloudera. Thank you very much, Harsh.
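(As a side note for anyone hitting this later: the JVM does not honor the KRB5_CONFIG environment variable that MIT Kerberos tools like kinit read; it falls back to /etc/krb5.conf, the "native config" shown in the debug output above. A minimal sketch, assuming a hypothetical custom config at /home/user01/krb5.conf, of pointing the Hadoop CLI's JVM at it instead of editing /etc/krb5.conf:

# The java.security.krb5.conf system property overrides the JVM's default
# lookup order (java.home/lib/security/krb5.conf, then the native /etc/krb5.conf).
$ export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.krb5.conf=/home/user01/krb5.conf"
$ hadoop fs -ls /
)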
01-03-2016 09:17 PM
FSCK prints the full identifier of a block, which is useful in some contexts depending on what you're about to troubleshoot or investigate. Here's a breakdown:

BP-929597290-192.0.0.2-1439573305237 = This is a Block Pool (BP) ID. It is the mark of a NameNode's ownership of the block in question. You might recall that HDFS now supports federated namespaces, wherein multiple NameNodes may be served by a single DataNode. This ID is how each NameNode is uniquely identified as the owner of a held block ID. Even though you do not explicitly utilise federation, the block-pool concept is now built into the identifier design of HDFS by default. See http://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/Federation.html#Multiple_NamenodesNamespaces

blk_1074084574_344316 = This is the block ID (blk_X_Y). Each block under every file is uniquely identified by a number X and a sub-number Y (the generation stamp). More on block IDs and HDFS architecture can be read in the AOS book: http://aosabook.org/en/hdfs.html

DS-730a75d3-046c-4254-990a-4eee9520424f,DISK = This is a storage identifier (DS) ID. It tells you which disk (hashed identifier) on the specified DataNode IP:PORT is actually the one holding the data, and what the type of that disk is (DISK). HDFS now supports tiered storage, in which this comes in useful, aside from other things: http://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html
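For reference, a minimal sketch of generating this kind of block report yourself; the file path below is a hypothetical example:

# Prints each file's blocks with their BP-, blk_ and DS- identifiers,
# plus the DataNode locations holding each replica.
$ hdfs fsck /user/foo/part-00000 -files -blocks -locations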
12-28-2015 07:32 AM
Thank you for the help, Harsh. I have updated to version 1.8.0_65, which worked. Thanks, Snm1523
12-21-2015 02:13 PM
Hi, I am also facing a similar issue. Can you tell me where I need to add hive-contrib.jar? We are using CDH 5.4.2. Please let me know the exact path. Thanks!!
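(In case it helps, a minimal sketch for a parcel-based CDH 5.4.2 install; the parcel symlink path and jar version are assumptions, so verify them on your gateway host first, then register the jar per session from the Hive CLI:

# Locate the jar shipped with the CDH parcel (path is an assumption; verify locally):
$ ls /opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib*.jar
# Then register it for the current session in the Hive shell:
$ hive
hive> ADD JAR /opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib-<version>.jar;
)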
12-21-2015 02:48 AM
Thanks a lot. It works.
12-17-2015 01:38 AM
Hello, Yes, I can see my application_id logs. However, it first shows the message: "Logs not available at /tmp/logs/hdfs/logs/application_1449728267224_0138. Log aggregation has not completed or is not enabled." Then the page reloads and shows the proper logs:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/disk2/yarn/nm/filecache/7107/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Dec 15, 2015 10:16:44 AM com.google.inject.servlet.InternalServletModule$BackwardsCompatibleServletContextProvider get
WARNING: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at http://code.google.com/p/google-guice/wiki/Servlets for more information.
Dec 15, 2015 10:16:44 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
Dec 15, 2015 10:16:44 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Dec 15, 2015 10:16:44 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering
.......
(what I'm searching for...) Thank you! Alina
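(As an aside, once aggregation has completed, the same aggregated logs can also be fetched from the command line, using the application ID from the message above:

$ yarn logs -applicationId application_1449728267224_0138
)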
12-16-2015 08:42 AM
Hello, So the Cell TTL is stored via Cell TTL tags. When I retrieve, it fails to get the Cell Tags for TTL, but the Cell TTL feature does work. Can you point me to what I am doing wrong in retrieving the Cell TTL? We are using hbase-1.0.0-cdh5.4.2. I fail to get the Cell Tag for TTL when I iterate over the cell Tags. During the Put we do put.setTTL( );

Table table..
Get get = new Get(Bytes.toBytes(<key>));
Result rs = table.get(get);
if (rs != null) {
    // getColumnCells returns a List<Cell>, one per version of the column
    List<Cell> cells = rs.getColumnCells(..);
    for (Cell cell : cells) {
        Iterator<Tag> i = CellUtil.tagsIterator(cell.getTagsArray(), cell.getTagsOffset(), cell.getTagsLength());
        while (i.hasNext()) {
            Tag t = i.next();
            if (TagType.TTL_TAG_TYPE == t.getType()) {
                long ts = cell.getTimestamp();
                assert t.getTagLength() == Bytes.SIZEOF_LONG;
                long ttl = Bytes.toLong(t.getBuffer(), t.getTagOffset(), t.getTagLength());
                System.out.println(ttl);
            }
        }
    }
}

Thanks in advance.
12-14-2015 09:20 AM
I found out my YARN deployment got messed up because I didn't add NodeManager roles after I added the new hosts manually. Oops!
12-12-2015 12:29 PM
What version of CM are you using, and have you attempted recently to redeploy the Spark gateway client configs? The below is what I have out of the box in CM 5.5:

#!/usr/bin/env bash
##
# Generated by Cloudera Manager and should not be modified directly
##
SELF="$(cd $(dirname $BASH_SOURCE) && pwd)"
if [ -z "$SPARK_CONF_DIR" ]; then
  export SPARK_CONF_DIR="$SELF"
fi

export SPARK_HOME=/opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/lib/spark
export DEFAULT_HADOOP_HOME=/opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/lib/hadoop

### Path of Spark assembly jar in HDFS
export SPARK_JAR_HDFS_PATH=${SPARK_JAR_HDFS_PATH:-''}

### Some definitions needed by older versions of CDH.
export SPARK_LAUNCH_WITH_SCALA=0
export SPARK_LIBRARY_PATH=${SPARK_HOME}/lib
export SCALA_LIBRARY_PATH=${SPARK_HOME}/lib

SPARK_PYTHON_PATH=""
if [ -n "$SPARK_PYTHON_PATH" ]; then
  export PYTHONPATH="$PYTHONPATH:$SPARK_PYTHON_PATH"
fi

export HADOOP_HOME=${HADOOP_HOME:-$DEFAULT_HADOOP_HOME}
if [ -n "$HADOOP_HOME" ]; then
  LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${HADOOP_HOME}/lib/native
fi

SPARK_EXTRA_LIB_PATH="/opt/cloudera/parcels/GPLEXTRAS-5.5.0-1.cdh5.5.0.p0.7/lib/hadoop/lib/native"
if [ -n "$SPARK_EXTRA_LIB_PATH" ]; then
  LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SPARK_EXTRA_LIB_PATH
fi
export LD_LIBRARY_PATH

HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-$SPARK_CONF_DIR/yarn-conf}
HIVE_CONF_DIR=${HIVE_CONF_DIR:-/etc/hive/conf}
if [ -d "$HIVE_CONF_DIR" ]; then
  HADOOP_CONF_DIR="$HADOOP_CONF_DIR:$HIVE_CONF_DIR"
fi
export HADOOP_CONF_DIR

PYLIB="$SPARK_HOME/python/lib"
if [ -f "$PYLIB/pyspark.zip" ]; then
  PYSPARK_ARCHIVES_PATH=
  for lib in "$PYLIB"/*.zip; do
    if [ -n "$PYSPARK_ARCHIVES_PATH" ]; then
      PYSPARK_ARCHIVES_PATH="$PYSPARK_ARCHIVES_PATH,local:$lib"
    else
      PYSPARK_ARCHIVES_PATH="local:$lib"
    fi
  done
  export PYSPARK_ARCHIVES_PATH
fi

# Set distribution classpath. This is only used in CDH 5.3 and later.
export SPARK_DIST_CLASSPATH=$(paste -sd: "$SELF/classpath.txt")
12-09-2015 07:32 AM
The role-level APIs carry the state, but you're querying the service level. Use the role IDs from the service-level response to then query the roles directly.
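For illustration, a minimal sketch against the CM REST API; the host, credentials, cluster, service, and role names below are placeholders, and the API version should match your CM release:

# List the roles under a service; each entry carries a role name/ID:
$ curl -u admin:admin "http://cm-host:7180/api/v10/clusters/cluster1/services/hdfs/roles"
# Then query one role directly; its roleState field holds the state you're after:
$ curl -u admin:admin "http://cm-host:7180/api/v10/clusters/cluster1/services/hdfs/roles/hdfs-NAMENODE-a1b2c3"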