Member since: 01-16-2014
Posts: 336
Kudos Received: 43
Solutions: 31

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3313 | 12-20-2017 08:26 PM
 | 3326 | 03-09-2017 03:47 PM
 | 2788 | 11-18-2016 09:00 AM
 | 4837 | 05-18-2016 08:29 PM
 | 3725 | 02-29-2016 01:14 AM
12-20-2017
07:26 AM
1 Kudo
Hi Wilfred, I'm sorry to ask again, but I'm facing the same problem and I don't understand how to configure the Dynamic Resource Pool Configuration so that it works with the original user's groups (mine, not hive's). I'm using CDH 5.13 with Kerberos and Sentry. Since I am using Sentry, impersonation is disabled.

My configuration is:

root
|--A
|--B

On root, the submission ACL is set to allow only the "sentry" user to submit to this pool.
On A, the submission ACL is set to allow only group A to submit to this pool.
On B, the submission ACL is set to allow only group B to submit to this pool.

The placement rules are:

1 - "Use the pool specified at run time, only if the pool exists."
2 - "Use the pool root.[username] and create the pool if it does not exist."

When I submit a query as a user from group A, using Hue and setting "set mapred.job.queue.name=A;", I get the error: "User hive cannot submit applications to queue root.A".

If I add hive to the allowed users on root, the query works fine, but users from both group A and group B can then submit queries to either pool. If I add hive only to the "A" resource pool, users from both groups can submit queries to resource pool A, but no one can submit to resource pool B.

Maybe I am missing an important part, but I don't get the behavior you explained, and adding hive to the authorized users breaks the ACLs, since every user could then use every resource pool. Can you give us the correct configuration to get the same behavior as yours?
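For reference, here is a minimal fair-scheduler.xml sketch of my pool setup (Cloudera Manager generates the real file from the Dynamic Resource Pool UI, so the generated file may differ in detail):

```xml
<!-- Minimal sketch of the pool setup described above. The ACL format is
     "users groups": a leading space means "no users, only these groups".
     Note that queue ACLs are hierarchical - anyone allowed on root is
     automatically allowed on root.A and root.B, which is why adding hive
     to root opens up every pool. -->
<allocations>
  <queue name="root">
    <aclSubmitApps>sentry </aclSubmitApps>
    <queue name="A">
      <aclSubmitApps> A</aclSubmitApps>
    </queue>
    <queue name="B">
      <aclSubmitApps> B</aclSubmitApps>
    </queue>
  </queue>
  <!-- Rule 1: use the pool specified at run time, only if it exists.
       Rule 2: use root.[username], creating the pool if needed. -->
  <queuePlacementPolicy>
    <rule name="specified" create="false"/>
    <rule name="user" create="true"/>
  </queuePlacementPolicy>
</allocations>
```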
12-08-2017
05:46 AM
What they have done is turn on partial log aggregation via the setting yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds. That allows you to grab some of the logs using the command line. We do not support this in CDH, although CDH contains the exact same code that is available upstream. We have tested the setting and found that it breaks log access via the different UIs in multiple ways. You get a working command line in 99% of the cases, but when you try to use the RM or AM UIs it almost always breaks, and the way it breaks changes over time for the same application. That is not a feature we can support in its current state. Wilfred
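For clarity, this is roughly what the upstream setting looks like in yarn-site.xml (the value here is purely illustrative):

```xml
<!-- Illustrative only: the upstream setting discussed above. The default
     of -1 disables rolling aggregation; a positive value (in seconds)
     makes the NodeManager periodically upload logs for applications that
     are still running. -->
<property>
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <value>3600</value>
</property>
<!-- The partially aggregated logs can then be fetched from the command
     line with: yarn logs -applicationId <application ID>
     even though the RM/AM web UIs may fail to show them, as noted above. -->
```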
12-08-2017
05:29 AM
You will need to shade the Guava version that you use in your application. There is no way to replace the Guava that is part of CDH with a later release; that would break a number of things. From the previous message, it looks like they did not shade it correctly. Wilfred
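As an illustration (not a drop-in recipe), shading Guava into your application jar with the maven-shade-plugin would look roughly like this; "com.myapp.shaded" is a placeholder prefix you choose yourself:

```xml
<!-- Sketch: relocate Guava's packages inside the application jar so the
     app's newer Guava cannot clash with the Guava bundled in CDH. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>com.myapp.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```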
06-05-2017
12:26 PM
In my case, neither the ResourceManager nor the NodeManager was up and running. Phew!
03-10-2017
06:42 AM
The configuration works fine. The only issue is that the bind user password is not redacted in the advanced configuration snippet and appears in clear text in core-site.xml. According to the security guide (sensitive data redaction) for v5.8.x (not documented for 5.7.x): "Redaction of Advanced Configuration Snippet parameters is based on detecting keywords explicitly defined as sensitive in the contents of these parameters. That is, parameters containing the keywords password, key, aws, or secret, will be redacted for users who do not have the required edit privileges." I'll open a case to check how to get this working on 5.7.1.
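For illustration, and assuming the parameter in question is the LDAP group mapping bind password, the snippet entry looks like this; on 5.8+ the value should be redacted because the parameter text contains the keyword "password", while on 5.7.x it shows up in clear text as described:

```xml
<!-- Hypothetical Advanced Configuration Snippet entry for core-site.xml,
     assuming the LDAP group mapping bind password is the one in question. -->
<property>
  <name>hadoop.security.group.mapping.ldap.bind.password</name>
  <value>my-bind-password</value>
</property>
```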
02-14-2017
01:29 AM
--conf "spark.driver.extraClassPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/htrace-core-3.1.0-incubating.jar:/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hive/conf:/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hive/lib/*.jar" \ --conf "spark.executor.extraClassPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/htrace-core-3.1.0-incubating.jar:/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hive/conf:/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hive/lib/*.jar" \ --conf "spark.driver.extraJavaOptions=-XX:MaxPermSize=1024m -XX:PermSize=256m" \ --conf "spark.executor.extraJavaOptions=-XX:MaxPermSize=1024m -XX:PermSize=256m" \ work for me
02-13-2017
08:10 PM
Thank you, you are right: once I created the user on each Linux machine, the task could be submitted successfully!
01-08-2017
04:03 PM
You can have differences between the options for the NMs; that is not the problem. Differences in the hardware used for the NMs can require different JVM options to be set, so that is something we allow and it will work. However, there cannot be an empty line in the options. The options are passed on to a script that sets the environment, and that is where it breaks: an empty line splits a setting into two in the generated script, which should not happen. The empty line(s) should be trimmed before we generate that settings script, which is the JIRA I filed. Wilfred
12-01-2016
06:24 AM
No, this is not a known issue as far as I know. If you have a support contract, please open a support case and we can look into it further for you. Wilfred
11-22-2016
04:45 AM
Thank you for the answer.