Member since 09-11-2015
23 Posts | 25 Kudos Received | 2 Solutions

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 2463 | 06-02-2017 10:59 AM
 | 1626 | 12-22-2016 04:20 PM
03-13-2017
11:33 PM
1 Kudo
Q: Is there any way to change the user my Hive jobs run as under LLAP? They always seem to run as the 'hive' user. A: Hive LLAP does not currently support hive.server2.enable.doAs=true; all sessions run under the hive service account.
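In practice this means the following must remain set in hive-site.xml when LLAP is enabled (per-user authorization is typically handled by Ranger or SQL standard authorization instead):

hive.server2.enable.doAs=false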
03-08-2017
03:29 PM
2 Kudos
This assumes you start with a kerberized HDP cluster with HBase installed. First, check what your service principal is:

klist -kt /etc/security/keytabs/hbase.service.keytab
Keytab name: FILE:hbase.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
2 12/20/16 13:51:21 hbase/hdp252.hdp@HWX.COM
2 12/20/16 13:51:21 hbase/hdp252.hdp@HWX.COM
2 12/20/16 13:51:21 hbase/hdp252.hdp@HWX.COM
2 12/20/16 13:51:21 hbase/hdp252.hdp@HWX.COM
2 12/20/16 13:51:21 hbase/hdp252.hdp@HWX.COM
In Ambari, head to HBase -> Configs -> Advanced -> Custom hbase-site and add the following new parameters, substituting your own keytab and principal:
hbase.thrift.security.qop=auth
hbase.thrift.support.proxyuser=true
hbase.regionserver.thrift.http=true
hbase.thrift.keytab.file=/etc/security/keytabs/hbase.service.keytab
hbase.thrift.kerberos.principal=hbase/_HOST@HWX.COM
hbase.security.authentication.spnego.kerberos.keytab=/etc/security/keytabs/spnego.service.keytab
hbase.security.authentication.spnego.kerberos.principal=HTTP/_HOST@HWX.COM
Check that the following are set in HDFS and, if not, add them to 'Custom core-site.xml':

hadoop.proxyuser.hbase.groups=*
hadoop.proxyuser.hbase.hosts=*

Restart the affected HBase and HDFS services. On the command line on the HBase master, kinit with the service keytab and start the Thrift server:

su - hbase
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/hdp252.hdp@HWX.COM
/usr/hdp/current/hbase-master/bin/hbase-daemon.sh start thrift --infoport 8086

The parameter we set earlier, 'hbase.regionserver.thrift.http=true', means the Thrift server will start in HTTP mode; to start it in binary mode, set this to false. Logs are written to /var/log/hbase, and you should see a running process.

To test the Thrift server in HTTP mode, the syntax is:

hbase org.apache.hadoop.hbase.thrift.HttpDoAsClient hdp252 9090 hbase true

To test in binary mode, the syntax is:

hbase org.apache.hadoop.hbase.thrift.DemoClient hdp252 9090 true
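To verify the daemon came up, a quick sanity check along these lines can help (hdp252.hdp and port 8086 are the example host and --infoport from above; adjust for your environment):

su - hbase -c "jps | grep ThriftServer"                              # the Thrift daemon runs as a ThriftServer process
curl -s -o /dev/null -w "%{http_code}\n" http://hdp252.hdp:8086/     # info UI on the --infoport should return 200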
02-08-2017
10:18 AM
1 Kudo
Problem: Following an upgrade to HDP 2.5, an Oozie Sqoop job which previously worked now fails with:

oozi-W@sqoop-aa6c] Launcher exception: org/json/JSONObject
java.lang.NoClassDefFoundError: org/json/JSONObject
at org.apache.sqoop.util.SqoopJsonUtil.getJsonStringforMap(SqoopJsonUtil.java:43)
at org.apache.sqoop.SqoopOptions.writeProperties(SqoopOptions.java:759)
at org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage.createInternal(HsqldbJobStorage.java:399)
at org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage.create(HsqldbJobStorage.java:379)
Solution: Ensure that 'java-json.jar' file exists in the following locations:
/usr/hdp/current/sqoop-client/lib
/usr/hdp/<version>/sqoop/lib

Also copy this jar into HDFS and reference it in your Oozie workflow, e.g.:

hdfs dfs -put /usr/hdp/current/sqoop-client/lib/java-json.jar /user/oozie/oozie_scripts/lib/

and change workflow.xml:

<archive>/user/oozie/oozie_scripts/lib/java-json.jar#java-json.jar</archive>
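For context, the <archive> element sits inside the Sqoop action in workflow.xml; a minimal sketch follows (the action name, job-tracker/name-node properties, and Sqoop command are illustrative, not taken from the failing job):

<action name="sqoop-node">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <command>job --exec my-saved-job</command>
        <archive>/user/oozie/oozie_scripts/lib/java-json.jar#java-json.jar</archive>
    </sqoop>
    <ok to="end"/>
    <error to="fail"/>
</action>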
12-24-2016
03:19 PM
2 Kudos
It is sometimes useful to adjust the log level when executing HBase client commands from the command line without having to restart components. This article explains how that can be achieved. You can set the DEBUG level for any HBase client command by exporting the following environment variable before the command is executed:

HBASE_ROOT_LOGGER=DEBUG,console

For example:

# export HBASE_ROOT_LOGGER=DEBUG,console
# hbase hbck
2016-12-24 15:18:03,240 DEBUG [main] util.Shell: setsid exited with exit code 0
2016-12-24 15:18:03,363 DEBUG [main] lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, about=, sampleName=Ops, type=DEFAULT, value=[Rate of successful kerberos logins and latency (milliseconds)], valueName=Time)
2016-12-24 15:18:03,373 DEBUG [main] lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, about=, sampleName=Ops, type=DEFAULT, value=[Rate of failed kerberos logins and latency (milliseconds)], valueName=Time)
2016-12-24 15:18:03,373 DEBUG [main] lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, about=, sampleName=Ops, type=DEFAULT, value=[GetGroups], valueName=Time)
2016-12-24 15:18:03,375 DEBUG [main] impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
2016-12-24 15:18:03,407 DEBUG [main] security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
2016-12-24 15:18:03,443 DEBUG [main] security.Groups: Creating new Groups object
2016-12-24 15:18:03,447 DEBUG [main] util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
2016-12-24 15:18:03,447 DEBUG [main] util.NativeCodeLoader: Loaded the native-hadoop library
2016-12-24 15:18:03,452 DEBUG [main] security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
2016-12-24 15:18:03,452 DEBUG [main] security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
2016-12-24 15:18:03,542 DEBUG [main] security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
2016-12-24 15:18:03,644 DEBUG [main] security.UserGroupInformation: hadoop login
2016-12-24 15:18:03,647 DEBUG [main] security.UserGroupInformation: hadoop login commit
2016-12-24 15:18:03,649 DEBUG [main] security.UserGroupInformation: using kerberos user:hbase/dghdp253.openstacklocal@HWX.COM
2016-12-24 15:18:03,649 DEBUG [main] security.UserGroupInformation: Using user: "hbase/dghdp253.openstacklocal@HWX.COM" with name hbase/dghdp253.openstacklocal@HWX.COM
2016-12-24 15:18:03,650 DEBUG [main] security.UserGroupInformation: User entry: "hbase/dghdp253.openstacklocal@HWX.COM"
2016-12-24 15:18:03,653 DEBUG [main] security.UserGroupInformation: UGI loginUser:hbase/dghdp253.openstacklocal@HWX.COM (auth:KERBEROS)
<snip>

Note that the following log levels are supported: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, TRACE_INT, WARN.
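The variable only affects the current shell session; to return to normal logging, unset it, or scope it to a single command rather than exporting it:

# unset HBASE_ROOT_LOGGER
# HBASE_ROOT_LOGGER=DEBUG,console hbase hbck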
12-24-2016
03:11 PM
1 Kudo
It is sometimes useful to adjust the log level when executing Hive commands from the command line without having to restart components. This article explains how that can be achieved. You can provide the log level on the command line to hive as follows:

hive -hiveconf hive.root.logger=DEBUG,console -e "show tables;"

For example:

# hive -hiveconf hive.root.logger=DEBUG,console -e "show tables;" | more
16/12/24 15:10:23 DEBUG util.VersionInfo: version: 2.7.3.2.5.3.0-37
16/12/24 15:10:27 [main]: DEBUG common.LogUtils: Using hive-site.xml found on CLASSPATH at /etc/hive/2.5.3.0-37/0/hive-site.xml
16/12/24 15:10:27 [main]: DEBUG session.SessionState: SessionState user: null
Logging initialized using configuration in file:/etc/hive/2.5.3.0-37/0/hive-log4j.properties
16/12/24 15:10:27 [main]: INFO SessionState:
Logging initialized using configuration in file:/etc/hive/2.5.3.0-37/0/hive-log4j.properties
16/12/24 15:10:27 [main]: DEBUG parse.VariableSubstitution: Substitution is on: hive
16/12/24 15:10:27 [main]: DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
16/12/24 15:10:27 [main]: DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
16/12/24 15:10:27 [main]: DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[GetGroups])
This can be especially useful if your Hive CLI is hanging or slow to connect. Note that the following log levels are supported: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, TRACE_INT, WARN.
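When collecting diagnostics for a support case, it can also help to capture the DEBUG stream to a file as well as the terminal (the log path here is arbitrary):

# hive -hiveconf hive.root.logger=DEBUG,console -e "show tables;" 2>&1 | tee /tmp/hive-debug.log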
12-24-2016
03:04 PM
1 Kudo
It's sometimes useful to adjust the log level when executing commands from the client to get more info about what is happening. This article explains how that can be achieved:
You can set DEBUG level for any client command by exporting the following environment variable before it is executed:
export HADOOP_ROOT_LOGGER=DEBUG,console
Now try executing a client command and watch the stream of DEBUG info come to your terminal. For example:
# hdfs dfs -ls
16/12/24 15:01:34 DEBUG util.Shell: setsid exited with exit code 0
16/12/24 15:01:34 DEBUG conf.Configuration: parsing URL jar:file:/usr/hdp/2.5.3.0-37/hadoop/hadoop-common-2.7.3.2.5.3.0-37.jar!/core-default.xml
16/12/24 15:01:34 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@149494d8
16/12/24 15:01:34 DEBUG conf.Configuration: parsing URL file:/etc/hadoop/2.5.3.0-37/0/core-site.xml
16/12/24 15:01:34 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@28c4711c
16/12/24 15:01:35 DEBUG security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
16/12/24 15:01:35 DEBUG security.Groups: Creating new Groups object
16/12/24 15:01:35 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
16/12/24 15:01:35 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
16/12/24 15:01:35 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
16/12/24 15:01:35 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
16/12/24 15:01:35 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
16/12/24 15:01:35 DEBUG security.UserGroupInformation: hadoop login
16/12/24 15:01:35 DEBUG security.UserGroupInformation: hadoop login commit
16/12/24 15:01:35 DEBUG security.UserGroupInformation: using kerberos user:hdfs@HWX.COM
16/12/24 15:01:35 DEBUG security.UserGroupInformation: Using user: "hdfs@HWX.COM" with name hdfs@HWX.COM
16/12/24 15:01:35 DEBUG security.UserGroupInformation: User entry: "hdfs@HWX.COM"
16/12/24 15:01:35 DEBUG security.UserGroupInformation: UGI loginUser:hdfs@HWX.COM (auth:KERBEROS)
<snip>
Note that the following log levels are supported: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, TRACE_INT, WARN.
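As with the HBase and Hive variants above, the setting can be scoped to a single command rather than exported for the whole session:

# HADOOP_ROOT_LOGGER=DEBUG,console hdfs dfs -ls /tmp
# unset HADOOP_ROOT_LOGGER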
12-22-2016
04:20 PM
2 Kudos
I've seen this before: a /tmp/hive/<id>.pipeout file is created for every JDBC connection, but, if memory serves, the files belonging to connections on which no operation was ever performed were not cleaned up. I believe this was fixed in later versions of HDP (>2.5.0), so if you're running an older version you could be hitting it. BUG-46108 is the issue I am referring to.
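Until an upgrade is possible, a hypothetical interim cleanup along these lines could keep the directory under control; the one-day age threshold is an assumption, and you should confirm the files don't belong to active sessions first:

find /tmp/hive -name "*.pipeout" -mtime +1 -delete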
07-11-2016
04:00 PM
1 Kudo
As Laurent says, you need to make this change in Ambari if you are using it to manage your cluster; otherwise the value in the file will be overwritten the next time Ambari pushes its configuration. Once you have made the change, be sure to restart the affected services.
03-02-2016
09:33 AM
1 Kudo
Thanks Neeraj - how does this help with integrated security?
03-02-2016
09:19 AM
6 Kudos
I need to use Sqoop on Linux to pull data from SQL Server running with integrated security. Can anyone confirm that they have made this work with HDP 2.3.4 and share the steps?
Labels: Apache Sqoop