Member since: 03-14-2017
Posts: 22
Kudos Received: 1
Solutions: 0
10-16-2017
04:12 PM
We have Ranger installed on our Hadoop cluster and had to set doAs to false, due to which all the Hive jobs run by end users were shown as the hive user in the RM web UI and in the Ranger audit. Because of this, we are unable to do auditing. We would like to set doAs to true; are there any impacts on the cluster or on any other services/components from setting doAs to true?
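For reference, the setting in question is hive.server2.enable.doAs in hive-site.xml; a minimal fragment showing the change being considered (whether true is appropriate depends on how authorization and auditing are handled in the deployment):

```xml
<property>
  <name>hive.server2.enable.doAs</name>
  <!-- true: queries run as the connecting end user; false: as the hive service user -->
  <value>true</value>
</property>
```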
Labels: Apache Hadoop, Apache Ranger
10-16-2017
04:06 PM
While installing HDP 2.5.3, in the 'Install Services' section, I was asked to assign the services below to nodes in the cluster, but I am not sure which type of node (master, data, or edge) each of them should be installed on:
1. Phoenix Query Server
2. Supervisor
3. Flume
4. Accumulo TServer
5. Livy Server
6. Spark Thrift Server
Could anyone clarify which type of node each service should run on, and on how many nodes each service needs to be running?
Labels: Apache Hadoop, Apache HBase
10-16-2017
03:14 PM
@Dinesh Chitlangia When doAs is set to false, all Hive jobs run as the hive user; I verified in the Ranger audit and the RM web UI that Hive jobs were shown as run by the hive user. How, then, can we do auditing, such as which user submitted which job and at what time? If this is not possible, this looks like a drawback in Hadoop.
10-15-2017
02:59 PM
In Hive, when doAs is set to true, Hive jobs run as the end user (the user executing the job) and the YARN ACLs set on the queue are in effect; but when doAs is set to false, all Hive jobs run as the hive user and the YARN ACLs are not in effect for the end user running the job.
In the scenario below (doAs set to false), user 'user02' runs a job in the 'engineering01' queue, where only 'user02' can submit applications to the queue, but the Hive job fails with the error "User hive cannot submit applications to queue root.engineering01".
In this scenario, how do YARN ACLs take effect for the end user? Granting the hive user permission to submit applications in every queue is not practical.
================
master01:~ # su - user02
-----------------
user02@master01:/root> mapred queue -showacls
17/10/15 19:42:27 INFO impl.TimelineClientImpl: Timeline service address: http://master01.teradata.com:8188/ws/v1/timeline/
17/10/15 19:42:28 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
Queue acls for user : user02
Queue Operations
=====================
root
default SUBMIT_APPLICATIONS
engineering01 SUBMIT_APPLICATIONS
support01 ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
--------------------
user02@master01:/root> beeline -u "jdbc:hive2://localhost:10000/default" -n user02 -p user02
WARNING: Use "yarn jar" to launch YARN applications.
Connecting to jdbc:hive2://localhost:10000/default
Connected to: Apache Hive (version 1.2.1.2.3.4.0-3485)
Driver: Hive JDBC (version 1.2.1.2.3.4.0-3485)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1.2.3.4.0-3485 by Apache Hive
[INFO] Unable to bind key for unsupported operation: backward-delete-word
[INFO] Unable to bind key for unsupported operation: down-history
[INFO] Unable to bind key for unsupported operation: up-history
0: jdbc:hive2://localhost:10000/default> set tez.queue.name=engineering01;
No rows affected (0.061 seconds)
0: jdbc:hive2://localhost:10000/default> create table test09 as select * from employee01;
INFO : Tez session hasn't been created yet. Opening session
ERROR : Failed to execute tez graph.
org.apache.tez.dag.api.TezException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1508070646645_0021 to YARN : org.apache.hadoop.security.AccessControlException: User hive cannot submit applications to queue root.engineering01
at org.apache.tez.client.TezClient.start(TezClient.java:413)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:196)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:271)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:151)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1703)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1460)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1101)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1096)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154)
at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1508070646645_0021 to YARN : org.apache.hadoop.security.AccessControlException: User hive cannot submit applications to queue root.engineering01
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:271)
at org.apache.tez.client.TezYarnClient.submitApplication(TezYarnClient.java:72)
at org.apache.tez.client.TezClient.start(TezClient.java:408)
... 22 more
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask (state=08S01,code=1)
0: jdbc:hive2://localhost:10000/default>
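For context, with doAs set to false the application really is submitted by the hive user, so a Capacity Scheduler ACL would have to name it explicitly for the submission above to succeed (a hypothetical fragment for the queue taken from the error message):

```xml
<property>
  <name>yarn.scheduler.capacity.root.engineering01.acl_submit_applications</name>
  <!-- with doAs=false the job arrives as user 'hive', so the ACL must include it -->
  <value>user02,hive</value>
</property>
```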
Labels: Apache Hive, Apache YARN
04-14-2017
02:59 PM
@Deepesh Below is the output of the grep command in the webhcat log directory. Could you kindly suggest further?
/log/webhcat # grep "hive.metastore.uris" *
webhcat.log.2017-03-29:templeton.hive.properties=hive.metastore.local=false, hive.metastore.uris=thrift://%HOSTGROUP::host_group_master1%:9933, hive.metastore.sasl.enabled=false,hive.metastore.execute.setugi=true,hive.metastore.uris=thrift://m02:9083\,thrift://m01:9083,hive.metastore.sasl.enabled=false,hive.metastore.execute.setugi=true,hive.execution.engine=tez
webhcat.log.2017-03-29:hive.metastore.uris=thrift://m02:9083,thrift://m01:9083
webhcat.log.2017-04-12:templeton.hive.properties=hive.metastore.local=false, hive.metastore.uris=thrift://%HOSTGROUP::host_group_master1%:9933, hive.metastore.sasl.enabled=false, hive.metastore.execute.setugi=true,hive.metastore.uris=thrift://m02:9083\,thrift://m01:9083,hive.metastore.sasl.enabled=false,hive.metastore.execute.setugi=true,hive.execution.engine=tez
webhcat.log.2017-04-12:hive.metastore.uris=thrift://m02:9083,thrift://m01:9083
04-11-2017
02:59 PM
@Deepesh The grep command didn't give any output. Is there a way to confirm the Templeton status? Can you suggest further on this?
============
# grep "hive.metastore.uris" webhcat.log ; echo $?
1
==============
04-11-2017
12:36 PM
@Deepesh I don't see any output for the grep command. Could you kindly let me know how to verify whether WebHCat is working fine or not?
================
# grep "hive.metastore.uris" webhcat.log ; echo $?
1
=================
Let me know if you need any further information on this.
04-10-2017
05:04 PM
@Deepesh
## As per the below, the WebHCat status is ok.
curl http://localhost:50111/templeton/v1/status
{"version":"v1","status":"ok"}
## Below is the curl command run to access the table, followed by the webhcat log, and I don't see any issues in the logs. Could you kindly suggest further.
Command:
curl -s -d execute="select+*+from+employee01;" -d statusdir="/user/root" 'http://localhost:50111/templeton/v1/hive?user.name=root'
webhcat.log:
INFO | 10 Apr 2017 22:27:06,321 | org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl | Timeline service address: http://localhost:50111/templeton/v1/status
INFO | 10 Apr 2017 22:27:06,513 | org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl | Timeline service address: http://localhost:50111/templeton/v1/status
INFO | 10 Apr 2017 22:27:06,613 | org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl | Timeline service address: http://localhost:50111/templeton/v1/status
INFO | 10 Apr 2017 22:27:06,621 | org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider | Failing over to rm2
INFO | 10 Apr 2017 22:27:26,063 | org.apache.hadoop.mapreduce.JobSubmitter | number of splits:1
INFO | 10 Apr 2017 22:27:26,164 | org.apache.hadoop.mapreduce.JobSubmitter | Submitting tokens for job: job_1491487510034_0009
INFO | 10 Apr 2017 22:27:26,216 | org.apache.hadoop.yarn.client.api.impl.YarnClientImpl | Submitted application application_1491487510034_0009
INFO | 10 Apr 2017 22:27:26,218 | org.apache.hadoop.mapreduce.Job | The url to track the job: http://localhost:50111/templeton/v1/status
INFO | 10 Apr 2017 22:27:26,219 | org.apache.curator.framework.imps.CuratorFrameworkImpl | Starting
INFO | 10 Apr 2017 22:27:26,224 | org.apache.curator.framework.state.ConnectionStateManager | State change: CONNECTED
04-06-2017
02:55 PM
When I try to execute the below Hive REST API call to access Hive tables, I see the below error in "stderr".
curl -s -d execute="select+*+from+employee01;" -d statusdir="/user/root/" 'http://localhost:50111/templeton/v1/hive?user.name=root'
{"id":"job_1491487510034_0007"}
===========================
# hadoop fs -cat stderr
WARNING: Use "yarn jar" to launch YARN applications.
Logging initialized using configuration in jar:file:/data3/hadoop/yarn/local/usercache/root/filecache/37/hive-common.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:494)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:680)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1533)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)
... 8 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)
... 14 more
Caused by: MetaException(message:Got exception: java.net.URISyntaxException Malformed escape pair at index 9: thrift://%HOSTGROUP::host_group_master1%:9933)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1223)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:229)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 19 more
===================================
# hadoop fs -cat stdout ------> (No output)
# Templeton status is ok.
curl http://localhost:50111/templeton/v1/status
{"version":"v1","status":"ok"}
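The MetaException above points at the root cause: the configured metastore URI still contains an unresolved Ambari blueprint placeholder, which Java's URI parser rejects as a malformed percent-escape. A small sketch of why:

```python
# The URI from the error still carries an Ambari blueprint placeholder.
# Java's URI parser rejects it because the '%' at index 9 is not followed
# by two hex digits ('%HO'), hence "Malformed escape pair at index 9".
uri = "thrift://%HOSTGROUP::host_group_master1%:9933"
bad = uri.index("%")
print(bad)               # 9
print(uri[bad:bad + 3])  # %HO
```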
Tags:
- Data Processing
- HDFS
- Upgrade to HDP 2.5.3 : ConcurrentModificationException When Executing Insert Overwrite : Hive
- WebHCat
- webhcatalog_templeton
Labels: Apache Hadoop, Apache Hive
04-06-2017
02:17 PM
@mqureshi @Namit Maheshwari @David Streever @Neeraj Sabharwal
Finally, I was able to do the below distcp from the insecure to the secure cluster by running the distcp command from the secured cluster.
## hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true hdfs://<insecure_hdp>/test01.txt hdfs://<secure_hdp>/user/hdfs
But when I run the same insecure-to-secure distcp on the insecure cluster, I get the error "SIMPLE authentication is not enabled."
## hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true hdfs://<insecure_hdp>/test01.txt hdfs://<secure_hdp>/user/hdfs
Can anyone tell me what the difference is when running the same distcp command on the secure versus the insecure cluster, given that it fails on the insecure cluster?
04-06-2017
02:12 PM
@Namit Maheshwari I have followed that article, but I was still facing the same issue.
04-06-2017
02:10 PM
@mqureshi Thanks for your inputs, but had you reviewed my post properly, I had already set the fallback property in the distcp command.
04-04-2017
07:03 PM
I am doing a distcp from an insecure to a secure Hadoop cluster and am getting the error "SIMPLE authentication is not enabled". Can anyone suggest?
hdfs@master02:~> hadoop distcp -Dipc.client.fallback-to-simple-auth-allowed=true hdfs://HDP23:8020/test01.txt hdfs://HDP24:8020/
17/04/05 00:09:28 ERROR tools.DistCp: Invalid arguments: org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
04-04-2017
06:58 PM
## Trying to distcp from an insecure Hadoop cluster (2.3.4) to a secure (2.4.2) Hadoop cluster, and it's failing with the below error: "Invalid arguments: SIMPLE authentication is not enabled".
## Can anyone confirm whether it is possible to distcp from an insecure to a secure Hadoop cluster? I have tried secure to insecure and it works fine. The link below is the HDP distcp matrix, where for "HDP 2.x" only secure-to-insecure HDFS is listed as successful, with no information about insecure to secure.
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_Sys_Admin_Guides/content/ref-cfb69f75-d06f-46a2-862f-efeba959b152.1.html
## If anyone has tested insecure to secure, could you kindly confirm? And is there any configuration required for insecure-to-secure distcp?
## While doing the insecure-to-secure distcp, it fails with "SIMPLE authentication is not enabled". I assume this refers to the target (secure) cluster; is there any way to enable simple authentication on the secure cluster?
## Also, the below distcp throws an "Invalid arguments" error. Any idea why distcp is throwing this error? Am I missing anything in the distcp command?
===========================
hdfs@master02:~> hadoop distcp -Dipc.client.fallback-to-simple-auth-allowed=true hdfs://HDP23:8020/test01.txt hdfs://HDP24:8020/
17/04/05 00:09:28 ERROR tools.DistCp: Invalid arguments:
org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2118)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1315)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1311)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
at org.apache.hadoop.tools.DistCp.setTargetPathExists(DistCp.java:217)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:116)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client.call(Client.java:1427)
at org.apache.hadoop.ipc.Client.call(Client.java:1358)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
... 9 more
Invalid arguments: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
usage: distcp OPTIONS [source_path...] <target_path>
========================================
Labels: Apache Hadoop
03-25-2017
02:44 PM
@Alex Miller I am facing an issue where, irrespective of the users defined for the queue, all users are able to run jobs in the queue. I came across this article and tried to deny all users on the root queue by entering a space in the root queue's Submit Applications field in the Ambari YARN Queue Manager, but the field does not accept a space character. Could you kindly let us know how to use a space in submit_applications to deny access to users?
03-25-2017
08:35 AM
@Namit Maheshwari @Neeraj Sabharwal Thanks for the information, but the 'Submit Applications' field for a queue in YARN Queue Manager does not accept a space. Is there a way to do it, and have you ever tried it yourselves? I even tried to enter the space manually in the capacity-scheduler.xml file, like below, and it didn't work.
<property>
<name>yarn.scheduler.capacity.root.acl_submit_applications</name>
<value> </value>
</property>
03-24-2017
07:08 PM
I have set up an ACL for a YARN queue (q01) from the Ambari YARN Queue Manager to allow only one user (user1) to submit jobs into the queue. But irrespective of the ACLs on the queue, all users are able to submit jobs to it. Kindly let me know whether I am doing anything wrong here or missed any other configuration.
Below is the ACL setup for queue 'q01' and the parent queue root, allowing user 'user1' to submit jobs:
=========================
<property>
<name>yarn.scheduler.capacity.root.acl_submit_applications</name>
<value>user1 </value>
</property>
<property>
<name>yarn.scheduler.capacity.root.q01.acl_submit_applications</name>
<value>user1 </value>
</property>
========================
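One thing worth checking (an assumption on my part, not something established in the thread): Capacity Scheduler queue ACLs are only enforced when ACLs are enabled in yarn-site.xml, and a user is admitted if they pass the ACL of the queue or of any ancestor queue:

```xml
<!-- yarn-site.xml: queue ACLs are ignored unless this is true -->
<property>
  <name>yarn.acl.enable</name>
  <value>true</value>
</property>
```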
Scenario 1: As per the above ACL on q01, only user1 should be able to submit a job, but user2 was also able to submit a job to q01, as shown in Scenario 2 below.
=================================
beeline -u jdbc:hive2://localhost:10000/default -n user1 -p user1 --hiveconf hive.execution.engine=mr
0: jdbc:hive2://localhost:10000/default> set mapred.job.queue.name=q01;
No rows affected (0.089 seconds)
insert into test_u01 values (1);
INFO : Table default.test_u01 stats: [numFiles=42, numRows=42, totalSize=84, rawDataSize=42]
No rows affected (21.783 seconds)
Scenario 2:
==================================
beeline -u jdbc:hive2://localhost:10000/default -n user2 -p user2 --hiveconf hive.execution.engine=mr
set mapred.job.queue.name=q01;
0: jdbc:hive2://localhost:10000/default> insert into test_u01 values (1);
INFO : Number of reduce tasks is set to 0 since there's no reduce operator
INFO : Table default.test_u01 stats: [numFiles=43, numRows=43, totalSize=86, rawDataSize=43]
No rows affected (21.616 seconds)
===================================
Labels: Apache Hadoop, Apache Hive, Apache YARN
03-24-2017
06:01 PM
@Deepesh I have verified in the ResourceManager UI that the queues are in place, and I even pasted the 'mapred queue -list' command output. My concern is how the queue names need to be used to run jobs in a specified queue, not whether the queues are reflected.
03-24-2017
04:27 PM
@Deepesh I am adding the queues from the YARN Queue Manager, and after adding them I refresh the capacity scheduler.
03-24-2017
04:18 PM
After configuring the below queues in YARN and submitting jobs to the created queues, the jobs fail with the below error.
ERROR:
Failed to submit application_XXXXXX to YARN : Application application_XXXXXX submitted by user to unknown queue: root.q01
## Queue's created in YARN:
----------------------------------------
hdfs@master01:~> mapred queue -list
======================
Queue Name : default
Queue State : stopped
Scheduling Info : Capacity: 0.0, MaximumCapacity: 50.0, CurrentCapacity: 0.0
======================
Queue Name : q01
Queue State : running
Scheduling Info : Capacity: 50.0, MaximumCapacity: 60.000004, CurrentCapacity: 0.0
======================
Queue Name : q02
Queue State : running
Scheduling Info : Capacity: 50.0, MaximumCapacity: 50.0, CurrentCapacity: 0.0
======================
Queue Name : child02
Queue State : running
Scheduling Info : Capacity: 100.0, MaximumCapacity: 100.0, CurrentCapacity: 0.0
-----------------------------------------
## Below are a few scenarios where jobs failed to submit to the queue with an 'unknown queue' error.
Scenario 1: The job failed to submit when the queue name was given with the parent root queue name prepended (root.q01).
--------------------------------------------
set mapred.job.queue.name=root.q01;
insert into test_u01 values (1);
Failed to submit application_1470318759626_0046 to YARN : Application application_1470318759626_0046 submitted by user user1 to unknown queue: root.q01
Scenario 2: The job executed successfully when the queue name was given as only the child queue (q01).
======================================
set mapred.job.queue.name=q01;
insert into test_u01 values (1);
INFO : Table default.test_u01 stats: [numFiles=40, numRows=40, totalSize=80, rawDataSize=40]
No rows affected (20.125 seconds)
======================================
Scenario 3: I need to run a job in a child of a parent queue, where the parent queue is itself a child of the root queue. I would like to execute the job in child02, which was created among the above queues, but with the below queue name it errors.
======================================
set mapred.job.queue.name=q02.child02;
insert into test_u01 values (1);
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1470318759626_0049 to YARN : Application application_1470318759626_0049 submitted by user user1 to unknown queue: q02.child02
=====================================
Can anyone please explain how queue names should be used when executing MapReduce jobs, and what is the best source for the actual queue names? And for Scenario 3 above, what should the queue name be to execute the job successfully in the child queue?
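For what it's worth, the Capacity Scheduler addresses queues by their leaf (short) name rather than the dotted path; under that assumption, Scenario 3 would target the nested queue like this (a sketch using the queue names above):

```sql
-- Capacity Scheduler queue names are leaf names, not dotted paths,
-- so the nested queue under q02 is addressed simply as 'child02'
set mapred.job.queue.name=child02;
insert into test_u01 values (1);
```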
Tags:
- Hadoop Core
- Hive
- queue
- Upgrade to HDP 2.5.3 : ConcurrentModificationException When Executing Insert Overwrite : Hive
- YARN
- yarn-scheduler
Labels: Apache Hive, Apache YARN
03-14-2017
06:54 PM
1 Kudo
Could anyone kindly explain the below "hadoop.proxyuser" properties set in core-site.xml for all the Hadoop components in the cluster?
Why are these properties set, and what happens when they are removed?
==================================
## grep -C3 hadoop.proxy core-site.xml
<property>
  <name>hadoop.proxyuser.falcon.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.falcon.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hbase.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hbase.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.hosts</name>
  <value>host01</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>host01</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>hosts01</value>
</property>
==================================
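To illustrate what these properties govern: a service account such as hive or oozie may impersonate an end user only when the request comes from an allowed host and the end user belongs to an allowed group, with '*' as a wildcard. If the properties are removed, impersonation is denied for that service, and components like HiveServer2, Oozie, or WebHCat fail with "is not allowed to impersonate" errors. A minimal sketch (not Hadoop's actual implementation) of the check:

```python
# Simplified sketch (not Hadoop's actual code) of the proxyuser check driven
# by hadoop.proxyuser.<user>.hosts / .groups in core-site.xml.
def proxy_allowed(conf, proxy_user, client_host, end_user_groups):
    hosts = conf.get("hadoop.proxyuser.%s.hosts" % proxy_user, "")
    groups = conf.get("hadoop.proxyuser.%s.groups" % proxy_user, "")
    host_ok = hosts == "*" or client_host in hosts.split(",")
    group_ok = groups == "*" or any(g in groups.split(",") for g in end_user_groups)
    return host_ok and group_ok

conf = {
    "hadoop.proxyuser.hive.hosts": "host01",   # values taken from the grep above
    "hadoop.proxyuser.hive.groups": "*",
}
print(proxy_allowed(conf, "hive", "host01", ["users"]))  # True
print(proxy_allowed(conf, "hive", "host02", ["users"]))  # False: not an allowed host
```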
Labels: Apache Hadoop, Apache Oozie