Member since: 05-10-2016
Posts: 184
Kudos Received: 60
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 4080 | 05-06-2017 10:21 PM |
|  | 4092 | 05-04-2017 08:02 PM |
|  | 5005 | 12-28-2016 04:49 PM |
|  | 1241 | 11-11-2016 08:09 PM |
|  | 3318 | 10-22-2016 03:03 AM |
02-01-2017
11:52 PM
1 Kudo
SYMPTOM:
hive> create table mytab (col1 int) location '/tmp/abc';
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.security.AccessControlException: Permission denied: user=dbloader, access=WRITE, inode="/user/hive":hive:hadoop:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1780)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1764)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1738)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAccess(FSNamesystem.java:8445)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkAccess(NameNodeRpcServer.java:2022)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.checkAccess(ClientNamenodeProtocolServerSideTranslatorPB.java:1451)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)
)
hive>
ROOT CAUSE: Misconfiguration of the property hive.metastore.warehouse.dir in hive-site.xml. The value defines the default location where objects are created; if it is set to a location such as "/user/hive", or to a directory where the connecting user lacks write permission, it can produce the exception above.
RESOLUTION: Ensure that the value is set to the default, i.e., "/apps/hive/warehouse", or to a directory where you have the appropriate permissions. This error can show up with or without impersonation enabled.
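For reference, the corrected entry in hive-site.xml would look like this (a minimal sketch using the default value mentioned above):
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/apps/hive/warehouse</value>
</property>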
01-20-2017
09:24 PM
1 Kudo
Problem
In this example, when trying to start the secondary Hive Metastore service, ZooKeeper is unable to create the znode with the appropriate permissions. This is seen mostly on non-Ambari-managed clusters.
17/01/19 18:25:17 ERROR metastore.HiveMetaStore: Metastore Thrift Server threw an exception...
org.apache.hadoop.hive.thrift.DelegationTokenStore$TokenStoreException: Error creating path /hivedelegationMETASTORE/keys
at org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.ensurePath(ZooKeeperTokenStore.java:166)
at org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.initClientAndPaths(ZooKeeperTokenStore.java:236)
at org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.init(ZooKeeperTokenStore.java:473)
at org.apache.hadoop.hive.thrift.HiveDelegationTokenManager.startDelegationTokenSecretManager(HiveDelegationTokenManager.java:92)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6031)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5945)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hivedelegationMETASTORE/keys
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:691)
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:675)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:672)
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:453)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:443)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:423)
at org.apache.curator.framework.imps.CreateBuilderImpl$3.forPath(CreateBuilderImpl.java:257)
at org.apache.curator.framework.imps.CreateBuilderImpl$3.forPath(CreateBuilderImpl.java:205)
at org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.ensurePath(ZooKeeperTokenStore.java:160)
... 11 more
Exception in thread "main" org.apache.hadoop.hive.thrift.DelegationTokenStore$TokenStoreException: Error creating path /hivedelegationMETASTORE/keys
at org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.ensurePath(ZooKeeperTokenStore.java:166)
at org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.initClientAndPaths(ZooKeeperTokenStore.java:236)
at org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.init(ZooKeeperTokenStore.java:473)
at org.apache.hadoop.hive.thrift.HiveDelegationTokenManager.startDelegationTokenSecretManager(HiveDelegationTokenManager.java:92)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6031)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5945)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hivedelegationMETASTORE/keys
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:691)
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:675)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:672)
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:453)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:443)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:423)
at org.apache.curator.framework.imps.CreateBuilderImpl$3.forPath(CreateBuilderImpl.java:257)
at org.apache.curator.framework.imps.CreateBuilderImpl$3.forPath(CreateBuilderImpl.java:205)
at org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.ensurePath(ZooKeeperTokenStore.java:160)
... 11 more
17/01/19 18:25:17 INFO metastore.HiveMetaStore: Shutting down hive metastore.
What to look for / Fix
The problem is usually due to the way ZooKeeper is configured. For instance, if zoo.cfg or zookeeper.env (java.env in some cases) has the following properties set:
kerberos.removeHostFromPrincipal = true
kerberos.removeRealmFromPrincipal = true
then verify the ACL on the znode via zkCli.sh. In this example, my ZooKeeper namespace for Hive is set to "hahs2", so this is what the permissions look like:
[zk: nodea.openstacklocal(CONNECTED) 1] getAcl /hahs2
'world,'anyone
: r
'sasl,'hive
: cdrwa
When the properties are set to strip the host and realm from the principal and "hive.cluster.delegation.token.store.zookeeper.acl" is not defined, the ACLs should look like the above. If that is not the case, you would instead see an ACL like this:
[zk: nodea.openstacklocal(CONNECTED) 1] getAcl /hahs2
'sasl,'hive/nodea.openstacklocal@HDP.COM
: cdrwa
These steps worked for me:
1. Stop the Hive server processes, i.e., the Hive Metastore and HiveServer2 instances.
2. Log in via zkCli.sh and try to remove the znode with "rmr /hivedelegationMETASTORE". If this gives an error like "NoAuth...", you might need to set any occurrence of the following properties to false. These could be in java.env within /etc/zookeeper/conf or /apache/zookeeper/conf, depending on your configuration:
kerberos.removeHostFromPrincipal = false
kerberos.removeRealmFromPrincipal = false
3. Launch zkCli again; you should now be able to delete the znode.
4. Switch the Kerberos stripping properties back to the defaults, and ensure they are defined in only one place, either zoo.cfg or java.env (zookeeper-env.sh in some scenarios):
kerberos.removeHostFromPrincipal = true
kerberos.removeRealmFromPrincipal = true
5. Restart the ZooKeeper servers (apply these changes to all of the ZooKeeper servers).
6. Start one of the Hive Metastore processes and check that the znodes are created with the appropriate permissions, i.e., short names, like this:
[zk: nodea.openstacklocal(CONNECTED) 1] getAcl /hivedelegationMETASTORE
'sasl,'hive
: cdrwa
7. If that is not the case, you might need to add the following to hive-site.xml on both Hive instances:
<name>hive.cluster.delegation.token.store.zookeeper.acl</name>
<value>sasl:hive:cdrwa</value>
8. Restart the Hive Metastore and HiveServer2 processes. The znode ACLs should then use the short names.
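For reference, the complete entry in hive-site.xml would look like this (a sketch that simply wraps the name/value pair from step 7 in the standard property element):
<property>
  <name>hive.cluster.delegation.token.store.zookeeper.acl</name>
  <value>sasl:hive:cdrwa</value>
</property>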
01-13-2017
01:39 PM
1 Kudo
Some corrections.
1. You shouldn't use the PostgreSQL version of psql with HAWQ. While it may work, you should use the one distributed with HAWQ. You'll also want other utilities like gpfdist, which are distributed with HAWQ and are not part of PostgreSQL. Instead, use rpm to install the utilities on an edge node:
rpm -i hawq-2.1.1.0-7.el6.x86_64.rpm
2. Do not use pg_ctl with HAWQ. Either use Ambari to restart the HAWQ service or use the "hawq" command in a terminal window; pg_ctl will probably stop being distributed with HAWQ in the future.
hawq stop cluster -u -a
Here -u means update the config and -a means do it silently.
3. An easier way to allow external connections is to add this to the end of the pg_hba.conf file:
host all all 0.0.0.0/0 md5
This means that all external connections will require an encrypted password to authenticate. This is the database password, not the operating system password:
psql -c "alter user gpadmin password 'secret'"
That will change the gpadmin password in the database to 'secret'.
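As a quick check after making these changes, a remote client could connect with the HAWQ-distributed psql along these lines (the host name is a placeholder; 5432 is the default master port), and it will prompt for the database password set above:
psql -h hawq-master.example.com -p 5432 -U gpadmin -d template1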
11-02-2017
07:10 PM
Yes, I had the same issue with comma-separated values; pipe separation did fix it. Thanks.
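A minimal sketch of switching to a pipe delimiter, assuming a Hive table is involved (the table name and columns are illustrative, not from the original thread):
CREATE TABLE mytab (col1 INT, col2 STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|';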
03-01-2017
11:16 PM
It's not related to HDP 2.5.0; I just encountered the same on 2.4.3. It was resolved for me after changing these properties in ambari.properties:
agent.threadpool.size.max
client.threadpool.size.max
The values should be mapped to the actual CPU core count. I also increased the heap size for the NameNode and DataNode to 2 GB from the default 1 GB.
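For example, on a hypothetical 16-core Ambari host the entries in ambari.properties might look like this (the core count is illustrative, not from the original post):
agent.threadpool.size.max=16
client.threadpool.size.max=16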
02-08-2017
06:50 PM
2 Kudos
@Sami Ahmad: You have an HDFS policy which does not grant your user permission to view the resources. In most components, this would mean the access request is simply denied. However, in HDFS, if a Ranger policy does not grant access to a resource, native Hadoop permissions are checked as well. If HDFS grants user 'SAMI' access to the resources, 'SAMI' will be able to access them (in spite of the Ranger policy not granting permission). You can check whether it is a Ranger policy or the native Hadoop ACLs that let your user view the resources via the Audit page -> Access tab. In the screenshot, the Policy ID is -- and Access Enforcer=hadoop-acl, which means the user had access through a native Hadoop ACL; none of the Ranger Hadoop policies are responsible for the allow/deny. Hope this helps.
screen-shot-2017-02-08-at-104435-am.png
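To inspect the native HDFS permissions that the hadoop-acl enforcer falls back to, you can check them directly from a terminal (the path below is a placeholder):
hdfs dfs -ls /data
hdfs dfs -getfacl /data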
11-11-2016
02:17 PM
1 Kudo
For tez "tasks" represent map operations or reduce operations. A DAG is a full workflow (job) of vertices (processing of tasks) and edges (data movement between vertices). See these links for a more detailed discussion: http://hortonworks.com/blog/expressing-data-processing-in-apache-tez/
https://community.hortonworks.com/questions/32164/question-on-tez-dag-task-and-pig-on-tez.html https://cwiki.apache.org/confluence/display/TEZ/How+initial+task+parallelism+works You can see number of tasks on the console output: You can also see this in Ambari Tez view (and drill down for greater details) See this for understanding Ambari Tez view: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_ambari_views_guide/content/section_using_tez_view.html
09-06-2017
04:36 PM
Where are the steps to configure R-server for Kerberos? Does it not need any Kerberos-related settings, no kinit, etc.? I noticed that the connect string has Kerberos-related variables, though. Is that really good enough? Please confirm.
rhive.connect(host="node1.hortonworks.com:10000/default;principal=hive/node1.hortonworks.com@HDP.COM;AuthMech=1;KrbHostFQDN=service.hortonworks.com;KrbServiceName=hive;KrbRealm=HDP.COM", defaultFS="hdfs://node1.hortonworks.com/rhive", hiveServer2=TRUE, updateJar=FALSE)
04-04-2018
04:20 PM
Hello, I've performed all the steps for creating the user and granting privileges, but it still gives the error below. If I try to log in as that user 'hive'@'metastore_fqdn', I am able to log in with the password provided; but when I restart Hive, it still shows the same error. Please, some help.
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.0-205/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.0-205/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://vm-hadoop-x1.fhills.local/hive
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: java.sql.SQLException : Access denied for user 'hive'@'vm-hadoop-s3.fhills.local' (using password: YES)
SQL Error code: 1045
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:80)
at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:133)
at org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:187)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:291)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.sql.SQLException: Access denied for user 'hive'@'vm-hadoop-s3.fhills.local' (using password: YES)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1078)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4187)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4119)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:927)
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1709)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1252)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2488)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2521)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2306)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:839)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:421)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:350)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:76)
... 11 more
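For reference, the create/grant steps referred to above are typically along these lines in MySQL (the host is taken from the error message; the password is a placeholder, not from the original post):
CREATE USER 'hive'@'vm-hadoop-s3.fhills.local' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'vm-hadoop-s3.fhills.local';
FLUSH PRIVILEGES;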
10-26-2016
11:05 PM
Problem
Using RHive with the following parameters to connect to Hive:
library(RHive)
Sys.setenv(HADOOP_CONF_DIR="/etc/hadoop/conf")
Sys.setenv(RHIVE_HIVESERVER_VERSION="2")
Sys.setenv(HIVE_HOME="/usr/hdp/current/hive-client")
Sys.setenv(HADOOP_HOME="/usr/hdp/current/hadoop-client")
rhive.init()
> rhive.connect()
2016-10-26 13:06:22,836 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1173)) - fs.default.name is deprecated. Instead, use fs.defaultFS
2016-10-26 13:06:23,720 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-10-26 13:06:24,631 WARN [main] shortcircuit.DomainSocketFactory (DomainSocketFactory.java:<init>(117)) - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2016-10-26 13:06:25,237 INFO [Thread-5] jdbc.Utils (Utils.java:parseURL(309)) - Supplied authorities: 127.0.0.1:10000
2016-10-26 13:06:25,240 INFO [Thread-5] jdbc.Utils (Utils.java:parseURL(397)) - Resolved authority: 127.0.0.1:10000
2016-10-26 13:06:25,292 INFO [Thread-5] jdbc.HiveConnection (HiveConnection.java:openTransport(209)) - Will try to open client transport with JDBC Uri: jdbc:hive2://127.0.0.1:10000/default
Exception in thread "Thread-5" java.lang.RuntimeException: org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Username: 'anonymous' not found. Make sure your client's username exists on the cluster
at com.nexr.rhive.hive.HiveJdbcClient$HiveJdbcConnector.connect(HiveJdbcClient.java:332)
at com.nexr.rhive.hive.HiveJdbcClient$HiveJdbcConnector.run(HiveJdbcClient.java:314)
Caused by: org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Username: 'anonymous' not found. Make sure your client's username exists on the cluster
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:255)
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:246)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:592)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:195)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at com.nexr.rhive.hive.DatabaseConnection.connect(DatabaseConnection.java:52)
at com.nexr.rhive.hive.HiveJdbcClient$HiveJdbcConnector.connect(HiveJdbcClient.java:325)
... 1 more
Caused by: org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Username: 'anonymous' not found. Make sure your client's username exists on the cluster
at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:266)
at org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:202)
at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:402)
at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:297)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1253)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1238)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Username: 'anonymous' not found. Make sure your client's username exists on the cluster
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:83)
at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
at com.sun.proxy.$Proxy20.open(Unknown Source)
at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:258)
... 12 more
Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Username: 'anonymous' not found. Make sure your client's username exists on the cluster
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:137)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
... 20 more
Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException:Username: 'anonymous' not found. Make sure your client's username exists on the cluster
at org.apache.hadoop.ipc.Client.call(Client.java:1427)
at org.apache.hadoop.ipc.Client.call(Client.java:1358)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:596)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
... 25 more
Resolution
Passing a user name on the connection resolves the issue. Here is an example of a connection string that works for RHive:
library(RHive)
Sys.setenv(HADOOP_CONF_DIR="/etc/hadoop/conf")
Sys.setenv(RHIVE_HIVESERVER_VERSION="2")
Sys.setenv(HIVE_HOME="/usr/hdp/current/hive-client")
Sys.setenv(HADOOP_HOME="/usr/hdp/current/hadoop-client")
rhive.init()
rhiveConnection<-rhive.connect(host="127.0.0.1", port=10000, hiveServer2=NA, defaultFS=NULL, updateJar=FALSE, user="hive", password=NULL, db="default", properties = character(0))
NOTE: Ensure the following parameters are set correctly:
- JAVA_HOME: PATH should pick up JAVA_HOME's binaries, so "export PATH=$JAVA_HOME/bin:${PATH}".
- RHive looks for binaries, so when setting HIVE_HOME and HADOOP_HOME, ensure that the hive-client and hadoop-client directories are being picked up.
- Verify the environment variables for RHive using rhive.env().
- It is very important to use double quotes "" to enclose string/character values, like the user and database names.
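A compact sketch of the environment setup described in the note (the JAVA_HOME path is illustrative; the HIVE_HOME and HADOOP_HOME values are the ones from the example above):
export JAVA_HOME=/usr/lib/jvm/java
export PATH=$JAVA_HOME/bin:${PATH}
export HIVE_HOME=/usr/hdp/current/hive-client
export HADOOP_HOME=/usr/hdp/current/hadoop-client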